00:00:00.001 Started by upstream project "autotest-per-patch" build number 127080
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.029 The recommended git tool is: git
00:00:00.029 using credential 00000000-0000-0000-0000-000000000002
00:00:00.030 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.046 Fetching changes from the remote Git repository
00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.074 Using shallow fetch with depth 1
00:00:00.074 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.074 > git --version # timeout=10
00:00:00.094 > git --version # 'git version 2.39.2'
00:00:00.094 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.122 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.122 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.624 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.635 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.646 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:02.646 > git config core.sparsecheckout # timeout=10
00:00:02.657 > git read-tree -mu HEAD # timeout=10
00:00:02.672 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:02.713 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:02.713 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:02.795 [Pipeline] Start of Pipeline
00:00:02.810 [Pipeline] library
00:00:02.811 Loading library shm_lib@master
00:00:02.811 Library shm_lib@master is cached. Copying from home.
00:00:02.829 [Pipeline] node
00:00:02.837 Running on WFP21 in /var/jenkins/workspace/crypto-phy-autotest
00:00:02.839 [Pipeline] {
00:00:02.849 [Pipeline] catchError
00:00:02.851 [Pipeline] {
00:00:02.863 [Pipeline] wrap
00:00:02.874 [Pipeline] {
00:00:02.884 [Pipeline] stage
00:00:02.886 [Pipeline] { (Prologue)
00:00:03.054 [Pipeline] sh
00:00:03.335 + logger -p user.info -t JENKINS-CI
00:00:03.352 [Pipeline] echo
00:00:03.353 Node: WFP21
00:00:03.361 [Pipeline] sh
00:00:03.657 [Pipeline] setCustomBuildProperty
00:00:03.669 [Pipeline] echo
00:00:03.671 Cleanup processes
00:00:03.677 [Pipeline] sh
00:00:03.964 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:03.964 1971375 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:03.977 [Pipeline] sh
00:00:04.257 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:04.257 ++ grep -v 'sudo pgrep'
00:00:04.257 ++ awk '{print $1}'
00:00:04.257 + sudo kill -9
00:00:04.257 + true
00:00:04.269 [Pipeline] cleanWs
00:00:04.277 [WS-CLEANUP] Deleting project workspace...
00:00:04.277 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.283 [WS-CLEANUP] done
00:00:04.286 [Pipeline] setCustomBuildProperty
00:00:04.296 [Pipeline] sh
00:00:04.574 + sudo git config --global --replace-all safe.directory '*'
00:00:04.639 [Pipeline] httpRequest
00:00:04.654 [Pipeline] echo
00:00:04.655 Sorcerer 10.211.164.101 is alive
00:00:04.662 [Pipeline] httpRequest
00:00:04.666 HttpMethod: GET
00:00:04.666 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:04.667 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:04.669 Response Code: HTTP/1.1 200 OK
00:00:04.669 Success: Status code 200 is in the accepted range: 200,404
00:00:04.670 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:05.025 [Pipeline] sh
00:00:05.302 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:05.317 [Pipeline] httpRequest
00:00:05.336 [Pipeline] echo
00:00:05.337 Sorcerer 10.211.164.101 is alive
00:00:05.345 [Pipeline] httpRequest
00:00:05.350 HttpMethod: GET
00:00:05.350 URL: http://10.211.164.101/packages/spdk_23a08191916c4348e4a1a2d4e0f4bfa04b40d2df.tar.gz
00:00:05.351 Sending request to url: http://10.211.164.101/packages/spdk_23a08191916c4348e4a1a2d4e0f4bfa04b40d2df.tar.gz
00:00:05.357 Response Code: HTTP/1.1 200 OK
00:00:05.358 Success: Status code 200 is in the accepted range: 200,404
00:00:05.359 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_23a08191916c4348e4a1a2d4e0f4bfa04b40d2df.tar.gz
00:00:35.955 [Pipeline] sh
00:00:36.243 + tar --no-same-owner -xf spdk_23a08191916c4348e4a1a2d4e0f4bfa04b40d2df.tar.gz
00:00:38.796 [Pipeline] sh
00:00:39.090 + git -C spdk log --oneline -n5
00:00:39.090 23a081919 python/rpc: Python rpc docs generator.
00:00:39.090 8d38a7da8 python/rpc: Replace jsonrpc.md with generated docs
00:00:39.091 ee633e585 rpc.py: access bdev rpcs directly from rpc module
00:00:39.091 6f18624d4 python/rpc: Python rpc call generator.
00:00:39.091 da8d49b2f python/rpc: Replace bdev.py with generated rpc's
00:00:39.100 [Pipeline] }
00:00:39.109 [Pipeline] // stage
00:00:39.115 [Pipeline] stage
00:00:39.116 [Pipeline] { (Prepare)
00:00:39.126 [Pipeline] writeFile
00:00:39.139 [Pipeline] sh
00:00:39.417 + logger -p user.info -t JENKINS-CI
00:00:39.435 [Pipeline] sh
00:00:39.730 + logger -p user.info -t JENKINS-CI
00:00:39.750 [Pipeline] sh
00:00:40.036 + cat autorun-spdk.conf
00:00:40.036 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:40.036 SPDK_TEST_BLOCKDEV=1
00:00:40.036 SPDK_TEST_ISAL=1
00:00:40.036 SPDK_TEST_CRYPTO=1
00:00:40.036 SPDK_TEST_REDUCE=1
00:00:40.036 SPDK_TEST_VBDEV_COMPRESS=1
00:00:40.036 SPDK_RUN_UBSAN=1
00:00:40.036 SPDK_TEST_ACCEL=1
00:00:40.044 RUN_NIGHTLY=0
00:00:40.048 [Pipeline] readFile
00:00:40.072 [Pipeline] withEnv
00:00:40.075 [Pipeline] {
00:00:40.089 [Pipeline] sh
00:00:40.376 + set -ex
00:00:40.376 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]]
00:00:40.376 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:40.376 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:40.376 ++ SPDK_TEST_BLOCKDEV=1
00:00:40.376 ++ SPDK_TEST_ISAL=1
00:00:40.376 ++ SPDK_TEST_CRYPTO=1
00:00:40.376 ++ SPDK_TEST_REDUCE=1
00:00:40.376 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:40.376 ++ SPDK_RUN_UBSAN=1
00:00:40.376 ++ SPDK_TEST_ACCEL=1
00:00:40.376 ++ RUN_NIGHTLY=0
00:00:40.376 + case $SPDK_TEST_NVMF_NICS in
00:00:40.376 + DRIVERS=
00:00:40.376 + [[ -n '' ]]
00:00:40.376 + exit 0
00:00:40.386 [Pipeline] }
00:00:40.405 [Pipeline] // withEnv
00:00:40.412 [Pipeline] }
00:00:40.430 [Pipeline] // stage
00:00:40.440 [Pipeline] catchError
00:00:40.442 [Pipeline] {
00:00:40.459 [Pipeline] timeout
00:00:40.459 Timeout set to expire in 1 hr 0 min
00:00:40.461 [Pipeline] {
00:00:40.478 [Pipeline] stage
00:00:40.480 [Pipeline] { (Tests)
00:00:40.499 [Pipeline] sh
00:00:40.785 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:00:40.785 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:00:40.785 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:00:40.785 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:00:40.785 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:40.785 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:00:40.785 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:00:40.785 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:40.785 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:00:40.785 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:40.785 + [[ crypto-phy-autotest == pkgdep-* ]]
00:00:40.785 + cd /var/jenkins/workspace/crypto-phy-autotest
00:00:40.785 + source /etc/os-release
00:00:40.785 ++ NAME='Fedora Linux'
00:00:40.785 ++ VERSION='38 (Cloud Edition)'
00:00:40.785 ++ ID=fedora
00:00:40.785 ++ VERSION_ID=38
00:00:40.785 ++ VERSION_CODENAME=
00:00:40.785 ++ PLATFORM_ID=platform:f38
00:00:40.785 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:40.785 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:40.785 ++ LOGO=fedora-logo-icon
00:00:40.785 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:40.785 ++ HOME_URL=https://fedoraproject.org/
00:00:40.785 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:40.785 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:40.785 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:40.785 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:40.785 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:40.785 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:40.785 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:40.785 ++ SUPPORT_END=2024-05-14
00:00:40.785 ++ VARIANT='Cloud Edition'
00:00:40.785 ++ VARIANT_ID=cloud
00:00:40.785 + uname -a
00:00:40.785 Linux spdk-wfp-21 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:40.785 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:00:44.077 Hugepages
00:00:44.077 node hugesize free / total
00:00:44.077 node0 1048576kB 0 / 0
00:00:44.077 node0 2048kB 0 / 0
00:00:44.077 node1 1048576kB 0 / 0
00:00:44.077 node1 2048kB 0 / 0
00:00:44.077
00:00:44.077 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:44.077 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:44.077 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:44.077 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:44.077 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:44.077 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:44.077 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:44.077 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:44.077 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:44.077 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:44.077 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:44.077 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:44.077 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:44.077 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:44.077 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:44.077 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:44.077 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:44.337 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:44.337 + rm -f /tmp/spdk-ld-path
00:00:44.337 + source autorun-spdk.conf
00:00:44.337 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.337 ++ SPDK_TEST_BLOCKDEV=1
00:00:44.337 ++ SPDK_TEST_ISAL=1
00:00:44.337 ++ SPDK_TEST_CRYPTO=1
00:00:44.337 ++ SPDK_TEST_REDUCE=1
00:00:44.337 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:44.337 ++ SPDK_RUN_UBSAN=1
00:00:44.337 ++ SPDK_TEST_ACCEL=1
00:00:44.337 ++ RUN_NIGHTLY=0
00:00:44.337 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:44.337 + [[ -n '' ]]
00:00:44.337 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:44.337 + for M in /var/spdk/build-*-manifest.txt
00:00:44.337 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:44.337 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:44.337 + for M in /var/spdk/build-*-manifest.txt
00:00:44.337 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:44.337 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:44.337 ++ uname
00:00:44.337 + [[ Linux == \L\i\n\u\x ]]
00:00:44.337 + sudo dmesg -T
00:00:44.337 + sudo dmesg --clear
00:00:44.337 + dmesg_pid=1972439
00:00:44.337 + [[ Fedora Linux == FreeBSD ]]
00:00:44.337 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:44.337 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:44.337 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:44.337 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:44.337 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:44.337 + [[ -x /usr/src/fio-static/fio ]]
00:00:44.337 + export FIO_BIN=/usr/src/fio-static/fio
00:00:44.337 + FIO_BIN=/usr/src/fio-static/fio
00:00:44.337 + sudo dmesg -Tw
00:00:44.337 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:44.337 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:44.337 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:44.337 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:44.337 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:44.337 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:44.337 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:44.337 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:44.337 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:44.337 Test configuration:
00:00:44.337 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.337 SPDK_TEST_BLOCKDEV=1
00:00:44.337 SPDK_TEST_ISAL=1
00:00:44.337 SPDK_TEST_CRYPTO=1
00:00:44.337 SPDK_TEST_REDUCE=1
00:00:44.337 SPDK_TEST_VBDEV_COMPRESS=1
00:00:44.337 SPDK_RUN_UBSAN=1
00:00:44.337 SPDK_TEST_ACCEL=1
00:00:44.598 RUN_NIGHTLY=0
18:03:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:00:44.598 18:03:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:44.598 18:03:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:44.598 18:03:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:44.598 18:03:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.598 18:03:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.598 18:03:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.598 18:03:52 -- paths/export.sh@5 -- $ export PATH
00:00:44.598 18:03:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:44.598 18:03:52 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:00:44.598 18:03:52 -- common/autobuild_common.sh@447 -- $ date +%s
00:00:44.598 18:03:52 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721837032.XXXXXX
00:00:44.598 18:03:53 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721837032.pgTvI0
00:00:44.598 18:03:53 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:00:44.598 18:03:53 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:00:44.598 18:03:53 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:00:44.598 18:03:53 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:44.598 18:03:53 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:44.598 18:03:53 -- common/autobuild_common.sh@463 -- $ get_config_params
00:00:44.598 18:03:53 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:00:44.598 18:03:53 -- common/autotest_common.sh@10 -- $ set +x
00:00:44.598 18:03:53 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:00:44.598 18:03:53 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:00:44.598 18:03:53 -- pm/common@17 -- $ local monitor
00:00:44.598 18:03:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:44.598 18:03:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:44.598 18:03:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:44.598 18:03:53 -- pm/common@21 -- $ date +%s
00:00:44.598 18:03:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:44.598 18:03:53 -- pm/common@21 -- $ date +%s
00:00:44.598 18:03:53 -- pm/common@25 -- $ sleep 1
00:00:44.598 18:03:53 -- pm/common@21 -- $ date +%s
00:00:44.598 18:03:53 -- pm/common@21 -- $ date +%s
00:00:44.598 18:03:53 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721837033
00:00:44.598 Traceback (most recent call last):
00:00:44.598 File "/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py", line 24, in <module>
00:00:44.598 import spdk.rpc as rpc # noqa
00:00:44.598 ^^^^^^^^^^^^^^^^^^^^^^
00:00:44.598 File "/var/jenkins/workspace/crypto-phy-autotest/spdk/python/spdk/rpc/__init__.py", line 13, in <module>
00:00:44.598 from . import bdev
00:00:44.598 File "/var/jenkins/workspace/crypto-phy-autotest/spdk/python/spdk/rpc/bdev.py", line 8, in <module>
00:00:44.598 from spdk.rpc.rpc import *
00:00:44.598 ModuleNotFoundError: No module named 'spdk.rpc.rpc'
00:00:44.598 18:03:53 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721837033
00:00:44.598 18:03:53 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721837033
00:00:44.598 18:03:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721837033
00:00:44.598 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721837033_collect-vmstat.pm.log
00:00:44.598 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721837033_collect-cpu-load.pm.log
00:00:44.598 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721837033_collect-cpu-temp.pm.log
00:00:44.598 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721837033_collect-bmc-pm.bmc.pm.log
00:00:45.536 18:03:54 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:00:45.536 18:03:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
18:03:54 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:45.536 18:03:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:45.536 18:03:54 -- spdk/autobuild.sh@16 -- $ date -u
00:00:45.536 Wed Jul 24 04:03:54 PM UTC 2024
00:00:45.536 18:03:54 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:45.536 v24.09-pre-316-g23a081919
00:00:45.536 18:03:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:45.536 18:03:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:45.536 18:03:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:45.536 18:03:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:45.536 18:03:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:45.536 18:03:54 -- common/autotest_common.sh@10 -- $ set +x
00:00:45.795 ************************************
00:00:45.795 START TEST ubsan
00:00:45.795 ************************************
00:00:45.795 18:03:54 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:00:45.795 using ubsan
00:00:45.795
00:00:45.795 real 0m0.001s
00:00:45.795 user 0m0.001s
00:00:45.795 sys 0m0.000s
00:00:45.795 18:03:54 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:00:45.795 18:03:54 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:45.795 ************************************
00:00:45.795 END TEST ubsan
00:00:45.795 ************************************
00:00:45.795 18:03:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:45.795 18:03:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:45.795 18:03:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:45.795 18:03:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:45.795 18:03:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:45.795 18:03:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:45.795 18:03:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:45.795 18:03:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:45.795 18:03:54 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared
00:00:45.795 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk
00:00:45.795 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:00:46.363 Using 'verbs' RDMA provider
00:00:59.516 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:14.413 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:14.413 Creating mk/config.mk...done.
00:01:14.413 Creating mk/cc.flags.mk...done.
00:01:14.413 Type 'make' to build.
00:01:14.413 18:04:21 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:14.413 18:04:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:14.413 18:04:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:14.413 18:04:21 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.413 ************************************
00:01:14.413 START TEST make
00:01:14.413 ************************************
00:01:14.413 18:04:21 make -- common/autotest_common.sh@1125 -- $ make -j112
00:01:41.022 The Meson build system
00:01:41.022 Version: 1.3.1
00:01:41.022 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk
00:01:41.022 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp
00:01:41.022 Build type: native build
00:01:41.022 Program cat found: YES (/usr/bin/cat)
00:01:41.022 Project name: DPDK
00:01:41.022 Project version: 24.03.0
00:01:41.022 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:41.022 C linker for the host machine: cc ld.bfd 2.39-16
00:01:41.022 Host machine cpu family: x86_64
00:01:41.022 Host machine cpu: x86_64
00:01:41.022 Message: ## Building in Developer Mode ##
00:01:41.022 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:41.022 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:41.022 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:41.022 Program python3 found: YES (/usr/bin/python3)
00:01:41.022 Program cat found: YES (/usr/bin/cat)
00:01:41.022 Compiler for C supports arguments -march=native: YES
00:01:41.022 Checking for size of "void *" : 8
00:01:41.022 Checking for size of "void *" : 8 (cached)
00:01:41.022 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:41.022 Library m found: YES
00:01:41.022 Library numa found: YES
00:01:41.022 Has header "numaif.h" : YES
00:01:41.022 Library fdt found: NO
00:01:41.022 Library execinfo found: NO
00:01:41.022 Has header "execinfo.h" : YES
00:01:41.022 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:41.022 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:41.022 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:41.022 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:41.022 Run-time dependency openssl found: YES 3.0.9
00:01:41.022 Run-time dependency libpcap found: YES 1.10.4
00:01:41.022 Has header "pcap.h" with dependency libpcap: YES
00:01:41.022 Compiler for C supports arguments -Wcast-qual: YES
00:01:41.022 Compiler for C supports arguments -Wdeprecated: YES
00:01:41.022 Compiler for C supports arguments -Wformat: YES
00:01:41.022 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:41.022 Compiler for C supports arguments -Wformat-security: NO
00:01:41.022 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:41.022 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:41.022 Compiler for C supports arguments -Wnested-externs: YES
00:01:41.022 Compiler for C supports arguments -Wold-style-definition: YES
00:01:41.022 Compiler for C supports arguments -Wpointer-arith: YES
00:01:41.022 Compiler for C supports arguments -Wsign-compare: YES
00:01:41.022 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:41.022 Compiler for C supports arguments -Wundef: YES
00:01:41.022 Compiler for C supports arguments -Wwrite-strings: YES
00:01:41.022 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:41.022 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:41.022 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:41.022 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:41.022 Program objdump found: YES (/usr/bin/objdump)
00:01:41.022 Compiler for C supports arguments -mavx512f: YES
00:01:41.022 Checking if "AVX512 checking" compiles: YES
00:01:41.022 Fetching value of define "__SSE4_2__" : 1
00:01:41.022 Fetching value of define "__AES__" : 1
00:01:41.022 Fetching value of define "__AVX__" : 1
00:01:41.022 Fetching value of define "__AVX2__" : 1
00:01:41.022 Fetching value of define "__AVX512BW__" : 1
00:01:41.022 Fetching value of define "__AVX512CD__" : 1
00:01:41.022 Fetching value of define "__AVX512DQ__" : 1
00:01:41.022 Fetching value of define "__AVX512F__" : 1
00:01:41.022 Fetching value of define "__AVX512VL__" : 1
00:01:41.022 Fetching value of define "__PCLMUL__" : 1
00:01:41.022 Fetching value of define "__RDRND__" : 1
00:01:41.022 Fetching value of define "__RDSEED__" : 1
00:01:41.023 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:41.023 Fetching value of define "__znver1__" : (undefined)
00:01:41.023 Fetching value of define "__znver2__" : (undefined)
00:01:41.023 Fetching value of define "__znver3__" : (undefined)
00:01:41.023 Fetching value of define "__znver4__" : (undefined)
00:01:41.023 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:41.023 Message: lib/log: Defining dependency "log"
00:01:41.023 Message: lib/kvargs: Defining dependency "kvargs"
00:01:41.023 Message: lib/telemetry: Defining dependency "telemetry"
00:01:41.023 Checking for function "getentropy" : NO
00:01:41.023 Message: lib/eal: Defining dependency "eal"
00:01:41.023 Message: lib/ring: Defining dependency "ring"
00:01:41.023 Message: lib/rcu: Defining dependency "rcu"
00:01:41.023 Message: lib/mempool: Defining dependency "mempool"
00:01:41.023 Message: lib/mbuf: Defining dependency "mbuf"
00:01:41.023 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:41.023 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:41.023 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:41.023 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:41.023 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:41.023 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:41.023 Compiler for C supports arguments -mpclmul: YES
00:01:41.023 Compiler for C supports arguments -maes: YES
00:01:41.023 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:41.023 Compiler for C supports arguments -mavx512bw: YES
00:01:41.023 Compiler for C supports arguments -mavx512dq: YES
00:01:41.023 Compiler for C supports arguments -mavx512vl: YES
00:01:41.023 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:41.023 Compiler for C supports arguments -mavx2: YES
00:01:41.023 Compiler for C supports arguments -mavx: YES
00:01:41.023 Message: lib/net: Defining dependency "net"
00:01:41.023 Message: lib/meter: Defining dependency "meter"
00:01:41.023 Message: lib/ethdev: Defining dependency "ethdev"
00:01:41.023 Message: lib/pci: Defining dependency "pci"
00:01:41.023 Message: lib/cmdline: Defining dependency "cmdline"
00:01:41.023 Message: lib/hash: Defining dependency "hash"
00:01:41.023 Message: lib/timer: Defining dependency "timer"
00:01:41.023 Message: lib/compressdev: Defining dependency "compressdev"
00:01:41.023 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:41.023 Message: lib/dmadev: Defining dependency "dmadev"
00:01:41.023 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:41.023 Message: lib/power: Defining dependency "power"
00:01:41.023 Message: lib/reorder: Defining dependency "reorder"
00:01:41.023 Message: lib/security: Defining dependency "security"
00:01:41.023 Has header "linux/userfaultfd.h" : YES
00:01:41.023 Has header "linux/vduse.h" : YES
00:01:41.023 Message: lib/vhost: Defining dependency "vhost"
00:01:41.023 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:41.023 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:01:41.023 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:41.023 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:41.023 Compiler for C supports arguments -std=c11: YES
00:01:41.023 Compiler for C supports arguments -Wno-strict-prototypes: YES
00:01:41.023 Compiler for C supports arguments -D_BSD_SOURCE: YES
00:01:41.023 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES
00:01:41.023 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES
00:01:41.023 Run-time dependency libmlx5 found: YES 1.24.44.0
00:01:41.023 Run-time dependency libibverbs found: YES 1.14.44.0
00:01:41.023 Library mtcr_ul found: NO
00:01:41.023 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO
00:01:41.023 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO
00:01:41.023 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES
00:01:41.023 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO
00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO
00:01:44.317 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES
00:01:44.317 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:01:44.317 
Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:01:44.317 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:01:44.317 Configuring mlx5_autoconf.h using configuration 00:01:44.317 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:01:44.317 Run-time dependency libcrypto found: YES 3.0.9 00:01:44.317 Library IPSec_MB found: YES 00:01:44.317 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:01:44.317 
Message: drivers/common/qat: Defining dependency "common_qat" 00:01:44.317 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:44.317 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:44.317 Library IPSec_MB found: YES 00:01:44.317 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:01:44.317 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:01:44.317 Compiler for C supports arguments -std=c11: YES (cached) 00:01:44.317 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:44.317 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:44.317 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:44.317 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:44.317 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:01:44.318 Run-time dependency libisal found: NO (tried pkgconfig) 00:01:44.318 Library libisal found: NO 00:01:44.318 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:01:44.318 Compiler for C supports arguments -std=c11: YES (cached) 00:01:44.318 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:44.318 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:44.318 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:44.318 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:44.318 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:01:44.318 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:44.318 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:44.318 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:44.318 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:44.318 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:44.318 
Program doxygen found: YES (/usr/bin/doxygen) 00:01:44.318 Configuring doxy-api-html.conf using configuration 00:01:44.318 Configuring doxy-api-man.conf using configuration 00:01:44.318 Program mandb found: YES (/usr/bin/mandb) 00:01:44.318 Program sphinx-build found: NO 00:01:44.318 Configuring rte_build_config.h using configuration 00:01:44.318 Message: 00:01:44.318 ================= 00:01:44.318 Applications Enabled 00:01:44.318 ================= 00:01:44.318 00:01:44.318 apps: 00:01:44.318 00:01:44.318 00:01:44.318 Message: 00:01:44.318 ================= 00:01:44.318 Libraries Enabled 00:01:44.318 ================= 00:01:44.318 00:01:44.318 libs: 00:01:44.318 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:44.318 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:44.318 cryptodev, dmadev, power, reorder, security, vhost, 00:01:44.318 00:01:44.318 Message: 00:01:44.318 =============== 00:01:44.318 Drivers Enabled 00:01:44.318 =============== 00:01:44.318 00:01:44.318 common: 00:01:44.318 mlx5, qat, 00:01:44.318 bus: 00:01:44.318 auxiliary, pci, vdev, 00:01:44.318 mempool: 00:01:44.318 ring, 00:01:44.318 dma: 00:01:44.318 00:01:44.318 net: 00:01:44.318 00:01:44.318 crypto: 00:01:44.318 ipsec_mb, mlx5, 00:01:44.318 compress: 00:01:44.318 isal, mlx5, 00:01:44.318 vdpa: 00:01:44.318 00:01:44.318 00:01:44.318 Message: 00:01:44.318 ================= 00:01:44.318 Content Skipped 00:01:44.318 ================= 00:01:44.318 00:01:44.318 apps: 00:01:44.318 dumpcap: explicitly disabled via build config 00:01:44.318 graph: explicitly disabled via build config 00:01:44.318 pdump: explicitly disabled via build config 00:01:44.318 proc-info: explicitly disabled via build config 00:01:44.318 test-acl: explicitly disabled via build config 00:01:44.318 test-bbdev: explicitly disabled via build config 00:01:44.318 test-cmdline: explicitly disabled via build config 00:01:44.318 test-compress-perf: explicitly disabled via build config 00:01:44.318 
test-crypto-perf: explicitly disabled via build config 00:01:44.318 test-dma-perf: explicitly disabled via build config 00:01:44.318 test-eventdev: explicitly disabled via build config 00:01:44.318 test-fib: explicitly disabled via build config 00:01:44.318 test-flow-perf: explicitly disabled via build config 00:01:44.318 test-gpudev: explicitly disabled via build config 00:01:44.318 test-mldev: explicitly disabled via build config 00:01:44.318 test-pipeline: explicitly disabled via build config 00:01:44.318 test-pmd: explicitly disabled via build config 00:01:44.318 test-regex: explicitly disabled via build config 00:01:44.318 test-sad: explicitly disabled via build config 00:01:44.318 test-security-perf: explicitly disabled via build config 00:01:44.318 00:01:44.318 libs: 00:01:44.318 argparse: explicitly disabled via build config 00:01:44.318 metrics: explicitly disabled via build config 00:01:44.318 acl: explicitly disabled via build config 00:01:44.318 bbdev: explicitly disabled via build config 00:01:44.318 bitratestats: explicitly disabled via build config 00:01:44.318 bpf: explicitly disabled via build config 00:01:44.318 cfgfile: explicitly disabled via build config 00:01:44.318 distributor: explicitly disabled via build config 00:01:44.318 efd: explicitly disabled via build config 00:01:44.318 eventdev: explicitly disabled via build config 00:01:44.318 dispatcher: explicitly disabled via build config 00:01:44.318 gpudev: explicitly disabled via build config 00:01:44.318 gro: explicitly disabled via build config 00:01:44.318 gso: explicitly disabled via build config 00:01:44.318 ip_frag: explicitly disabled via build config 00:01:44.318 jobstats: explicitly disabled via build config 00:01:44.318 latencystats: explicitly disabled via build config 00:01:44.318 lpm: explicitly disabled via build config 00:01:44.318 member: explicitly disabled via build config 00:01:44.318 pcapng: explicitly disabled via build config 00:01:44.318 rawdev: explicitly disabled 
via build config 00:01:44.318 regexdev: explicitly disabled via build config 00:01:44.318 mldev: explicitly disabled via build config 00:01:44.318 rib: explicitly disabled via build config 00:01:44.318 sched: explicitly disabled via build config 00:01:44.318 stack: explicitly disabled via build config 00:01:44.318 ipsec: explicitly disabled via build config 00:01:44.318 pdcp: explicitly disabled via build config 00:01:44.318 fib: explicitly disabled via build config 00:01:44.318 port: explicitly disabled via build config 00:01:44.318 pdump: explicitly disabled via build config 00:01:44.318 table: explicitly disabled via build config 00:01:44.318 pipeline: explicitly disabled via build config 00:01:44.318 graph: explicitly disabled via build config 00:01:44.318 node: explicitly disabled via build config 00:01:44.318 00:01:44.318 drivers: 00:01:44.318 common/cpt: not in enabled drivers build config 00:01:44.318 common/dpaax: not in enabled drivers build config 00:01:44.318 common/iavf: not in enabled drivers build config 00:01:44.318 common/idpf: not in enabled drivers build config 00:01:44.318 common/ionic: not in enabled drivers build config 00:01:44.318 common/mvep: not in enabled drivers build config 00:01:44.318 common/octeontx: not in enabled drivers build config 00:01:44.318 bus/cdx: not in enabled drivers build config 00:01:44.318 bus/dpaa: not in enabled drivers build config 00:01:44.318 bus/fslmc: not in enabled drivers build config 00:01:44.318 bus/ifpga: not in enabled drivers build config 00:01:44.318 bus/platform: not in enabled drivers build config 00:01:44.318 bus/uacce: not in enabled drivers build config 00:01:44.318 bus/vmbus: not in enabled drivers build config 00:01:44.318 common/cnxk: not in enabled drivers build config 00:01:44.318 common/nfp: not in enabled drivers build config 00:01:44.318 common/nitrox: not in enabled drivers build config 00:01:44.318 common/sfc_efx: not in enabled drivers build config 00:01:44.318 mempool/bucket: not in 
enabled drivers build config 00:01:44.318 mempool/cnxk: not in enabled drivers build config 00:01:44.318 mempool/dpaa: not in enabled drivers build config 00:01:44.318 mempool/dpaa2: not in enabled drivers build config 00:01:44.318 mempool/octeontx: not in enabled drivers build config 00:01:44.318 mempool/stack: not in enabled drivers build config 00:01:44.318 dma/cnxk: not in enabled drivers build config 00:01:44.318 dma/dpaa: not in enabled drivers build config 00:01:44.318 dma/dpaa2: not in enabled drivers build config 00:01:44.318 dma/hisilicon: not in enabled drivers build config 00:01:44.318 dma/idxd: not in enabled drivers build config 00:01:44.318 dma/ioat: not in enabled drivers build config 00:01:44.318 dma/skeleton: not in enabled drivers build config 00:01:44.318 net/af_packet: not in enabled drivers build config 00:01:44.318 net/af_xdp: not in enabled drivers build config 00:01:44.318 net/ark: not in enabled drivers build config 00:01:44.318 net/atlantic: not in enabled drivers build config 00:01:44.318 net/avp: not in enabled drivers build config 00:01:44.318 net/axgbe: not in enabled drivers build config 00:01:44.318 net/bnx2x: not in enabled drivers build config 00:01:44.318 net/bnxt: not in enabled drivers build config 00:01:44.318 net/bonding: not in enabled drivers build config 00:01:44.318 net/cnxk: not in enabled drivers build config 00:01:44.318 net/cpfl: not in enabled drivers build config 00:01:44.318 net/cxgbe: not in enabled drivers build config 00:01:44.318 net/dpaa: not in enabled drivers build config 00:01:44.318 net/dpaa2: not in enabled drivers build config 00:01:44.318 net/e1000: not in enabled drivers build config 00:01:44.318 net/ena: not in enabled drivers build config 00:01:44.318 net/enetc: not in enabled drivers build config 00:01:44.318 net/enetfec: not in enabled drivers build config 00:01:44.318 net/enic: not in enabled drivers build config 00:01:44.318 net/failsafe: not in enabled drivers build config 00:01:44.318 
net/fm10k: not in enabled drivers build config 00:01:44.318 net/gve: not in enabled drivers build config 00:01:44.318 net/hinic: not in enabled drivers build config 00:01:44.318 net/hns3: not in enabled drivers build config 00:01:44.318 net/i40e: not in enabled drivers build config 00:01:44.318 net/iavf: not in enabled drivers build config 00:01:44.318 net/ice: not in enabled drivers build config 00:01:44.318 net/idpf: not in enabled drivers build config 00:01:44.318 net/igc: not in enabled drivers build config 00:01:44.318 net/ionic: not in enabled drivers build config 00:01:44.318 net/ipn3ke: not in enabled drivers build config 00:01:44.318 net/ixgbe: not in enabled drivers build config 00:01:44.318 net/mana: not in enabled drivers build config 00:01:44.318 net/memif: not in enabled drivers build config 00:01:44.319 net/mlx4: not in enabled drivers build config 00:01:44.319 net/mlx5: not in enabled drivers build config 00:01:44.319 net/mvneta: not in enabled drivers build config 00:01:44.319 net/mvpp2: not in enabled drivers build config 00:01:44.319 net/netvsc: not in enabled drivers build config 00:01:44.319 net/nfb: not in enabled drivers build config 00:01:44.319 net/nfp: not in enabled drivers build config 00:01:44.319 net/ngbe: not in enabled drivers build config 00:01:44.319 net/null: not in enabled drivers build config 00:01:44.319 net/octeontx: not in enabled drivers build config 00:01:44.319 net/octeon_ep: not in enabled drivers build config 00:01:44.319 net/pcap: not in enabled drivers build config 00:01:44.319 net/pfe: not in enabled drivers build config 00:01:44.319 net/qede: not in enabled drivers build config 00:01:44.319 net/ring: not in enabled drivers build config 00:01:44.319 net/sfc: not in enabled drivers build config 00:01:44.319 net/softnic: not in enabled drivers build config 00:01:44.319 net/tap: not in enabled drivers build config 00:01:44.319 net/thunderx: not in enabled drivers build config 00:01:44.319 net/txgbe: not in enabled 
drivers build config 00:01:44.319 net/vdev_netvsc: not in enabled drivers build config 00:01:44.319 net/vhost: not in enabled drivers build config 00:01:44.319 net/virtio: not in enabled drivers build config 00:01:44.319 net/vmxnet3: not in enabled drivers build config 00:01:44.319 raw/*: missing internal dependency, "rawdev" 00:01:44.319 crypto/armv8: not in enabled drivers build config 00:01:44.319 crypto/bcmfs: not in enabled drivers build config 00:01:44.319 crypto/caam_jr: not in enabled drivers build config 00:01:44.319 crypto/ccp: not in enabled drivers build config 00:01:44.319 crypto/cnxk: not in enabled drivers build config 00:01:44.319 crypto/dpaa_sec: not in enabled drivers build config 00:01:44.319 crypto/dpaa2_sec: not in enabled drivers build config 00:01:44.319 crypto/mvsam: not in enabled drivers build config 00:01:44.319 crypto/nitrox: not in enabled drivers build config 00:01:44.319 crypto/null: not in enabled drivers build config 00:01:44.319 crypto/octeontx: not in enabled drivers build config 00:01:44.319 crypto/openssl: not in enabled drivers build config 00:01:44.319 crypto/scheduler: not in enabled drivers build config 00:01:44.319 crypto/uadk: not in enabled drivers build config 00:01:44.319 crypto/virtio: not in enabled drivers build config 00:01:44.319 compress/nitrox: not in enabled drivers build config 00:01:44.319 compress/octeontx: not in enabled drivers build config 00:01:44.319 compress/zlib: not in enabled drivers build config 00:01:44.319 regex/*: missing internal dependency, "regexdev" 00:01:44.319 ml/*: missing internal dependency, "mldev" 00:01:44.319 vdpa/ifc: not in enabled drivers build config 00:01:44.319 vdpa/mlx5: not in enabled drivers build config 00:01:44.319 vdpa/nfp: not in enabled drivers build config 00:01:44.319 vdpa/sfc: not in enabled drivers build config 00:01:44.319 event/*: missing internal dependency, "eventdev" 00:01:44.319 baseband/*: missing internal dependency, "bbdev" 00:01:44.319 gpu/*: missing 
internal dependency, "gpudev" 00:01:44.319 00:01:44.319 00:01:44.319 Build targets in project: 115 00:01:44.319 00:01:44.319 DPDK 24.03.0 00:01:44.319 00:01:44.319 User defined options 00:01:44.319 buildtype : debug 00:01:44.319 default_library : shared 00:01:44.319 libdir : lib 00:01:44.319 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:01:44.319 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:01:44.319 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:01:44.319 cpu_instruction_set: native 00:01:44.319 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:44.319 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:44.319 enable_docs : false 00:01:44.319 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:01:44.319 enable_kmods : false 00:01:44.319 max_lcores : 128 00:01:44.319 tests : false 00:01:44.319 00:01:44.319 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.898 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:01:44.898 [1/378] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:44.898 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:44.898 [3/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:44.898 [4/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:44.898 [5/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:44.898 [6/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:44.898 [7/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:44.898 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:44.898 [9/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:44.898 [10/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:44.898 [11/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:44.898 [12/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:44.898 [13/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:44.898 [14/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:45.160 [15/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:45.160 [16/378] Linking static target lib/librte_kvargs.a 00:01:45.160 [17/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:45.160 [18/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:45.160 [19/378] Linking static target lib/librte_log.a 00:01:45.160 [20/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:45.160 [21/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:45.160 [22/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:45.160 [23/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:45.160 [24/378] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:45.160 [25/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:45.160 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:45.160 [27/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:45.160 [28/378] Linking static target lib/librte_pci.a 00:01:45.160 [29/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:45.160 [30/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:45.160 [31/378] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:45.160 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:45.421 [33/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:45.421 [34/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:45.421 [35/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:45.421 [36/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:45.421 [37/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:45.421 [38/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:45.421 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:45.421 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:45.421 [41/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:45.421 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:45.421 [43/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:45.421 [44/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:45.421 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:45.421 [46/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:45.421 
[47/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:45.421 [48/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:45.421 [49/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.421 [50/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:45.421 [51/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:45.421 [52/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:45.421 [53/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:45.421 [54/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:45.421 [55/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:45.421 [56/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:45.421 [57/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:45.421 [58/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:45.421 [59/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:45.421 [60/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:45.421 [61/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:45.421 [62/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:45.421 [63/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:45.421 [64/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:45.421 [65/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:45.421 [66/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:45.687 [67/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:45.687 [68/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 
00:01:45.687 [69/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:45.687 [70/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.687 [71/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:45.687 [72/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:45.687 [73/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:45.687 [74/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:45.687 [75/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:45.687 [76/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:45.687 [77/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:45.687 [78/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:45.687 [79/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:45.687 [80/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:45.687 [81/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:45.687 [82/378] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:45.687 [83/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:45.687 [84/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:45.687 [85/378] Linking static target lib/librte_meter.a 00:01:45.687 [86/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:45.687 [87/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:45.687 [88/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:45.687 [89/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:45.687 [90/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:45.687 [91/378] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:45.687 [92/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:45.687 [93/378] Linking static target lib/librte_ring.a 00:01:45.687 [94/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:45.687 [95/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:45.687 [96/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:45.687 [97/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:45.687 [98/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:45.687 [99/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:45.687 [100/378] Linking static target lib/librte_telemetry.a 00:01:45.687 [101/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:45.687 [102/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:45.687 [103/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:45.687 [104/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:45.687 [105/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:45.687 [106/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:45.687 [107/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:45.687 [108/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:45.687 [109/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:45.687 [110/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:01:45.687 [111/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:45.687 [112/378] Linking static target lib/librte_cmdline.a 00:01:45.687 [113/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:01:45.687 [114/378] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:45.687 [115/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:45.687 [116/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:45.687 [117/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:45.687 [118/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:45.687 [119/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:45.687 [120/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:45.687 [121/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:45.946 [122/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:45.946 [123/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:45.946 [124/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:45.946 [125/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:45.946 [126/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:45.946 [127/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:45.946 [128/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:45.946 [129/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:45.946 [130/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:45.946 [131/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:45.946 [132/378] Linking static target lib/librte_mempool.a 00:01:45.946 [133/378] Linking static target lib/librte_timer.a 00:01:45.946 [134/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:45.946 [135/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:45.946 [136/378] Linking static target lib/librte_net.a 00:01:45.946 [137/378] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:45.946 [138/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:45.946 [139/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:45.946 [140/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:45.946 [141/378] Linking static target lib/librte_rcu.a 00:01:45.946 [142/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:45.946 [143/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:45.946 [144/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:45.946 [145/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:45.946 [146/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:45.946 [147/378] Linking static target lib/librte_eal.a 00:01:45.946 [148/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:45.946 [149/378] Linking static target lib/librte_dmadev.a 00:01:45.946 [150/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:45.946 [151/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:45.946 [152/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:45.946 [153/378] Linking static target lib/librte_compressdev.a 00:01:45.946 [154/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:45.946 [155/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:45.946 [156/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:46.203 [157/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.203 [158/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:46.203 [159/378] Linking static target lib/librte_mbuf.a 00:01:46.203 [160/378] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:46.203 [161/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:46.203 [162/378] Linking target lib/librte_log.so.24.1 00:01:46.203 [163/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.203 [164/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:46.203 [165/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.203 [166/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:46.203 [167/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.203 [168/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:46.203 [169/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.203 [170/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:46.203 [171/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:46.203 [172/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:46.203 [173/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.203 [174/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:46.203 [175/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.203 [176/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:46.203 [177/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:46.203 [178/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:46.203 [179/378] Linking static target lib/librte_hash.a 00:01:46.203 [180/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:46.462 [181/378] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:46.462 [182/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:46.462 [183/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:46.462 [184/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.462 [185/378] Linking static target lib/librte_power.a 00:01:46.462 [186/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:01:46.462 [187/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.462 [188/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:46.462 [189/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.462 [190/378] Linking target lib/librte_kvargs.so.24.1 00:01:46.462 [191/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:46.462 [192/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:01:46.462 [193/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:46.462 [194/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.462 [195/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:46.462 [196/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:46.462 [197/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:46.462 [198/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:46.462 [199/378] Linking static target lib/librte_reorder.a 00:01:46.462 [200/378] Linking target lib/librte_telemetry.so.24.1 00:01:46.462 [201/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:46.462 [202/378] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:46.462 [203/378] Linking static target lib/librte_cryptodev.a 00:01:46.462 [204/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:46.462 [205/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:46.462 [206/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:46.462 [207/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:46.462 [208/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:01:46.462 [209/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:46.462 [210/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:46.462 [211/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:46.462 [212/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:46.462 [213/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:46.462 [214/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:46.462 [215/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:46.462 [216/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:46.462 [217/378] Linking static target lib/librte_security.a 00:01:46.462 [218/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:46.462 [219/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:46.462 [220/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:46.462 [221/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:46.462 
[222/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:46.462 [223/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:46.462 [224/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:46.462 [225/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:46.462 [226/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:46.462 [227/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:46.462 [228/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:46.462 [229/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:01:46.462 [230/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:01:46.462 [231/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:46.462 [232/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:46.462 [233/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:46.462 [234/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:46.462 [235/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:46.462 [236/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:46.462 [237/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:46.462 [238/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:46.462 [239/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:46.462 [240/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:46.462 
[241/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.462 [242/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.462 [243/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:46.462 [244/378] Linking static target drivers/librte_bus_vdev.a 00:01:46.462 [245/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:46.462 [246/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:01:46.462 [247/378] Linking static target drivers/librte_bus_auxiliary.a 00:01:46.462 [248/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.720 [249/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:46.720 [250/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:46.720 [251/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:46.720 [252/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:46.720 [253/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:46.720 [254/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:46.720 [255/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.720 [256/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:46.720 [257/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.720 [258/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:46.720 [259/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.720 [260/378] Linking static target 
drivers/librte_bus_pci.a 00:01:46.720 [261/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:46.720 [262/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:46.720 [263/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:46.720 [264/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:46.720 [265/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:46.720 [266/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:46.720 [267/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:46.720 [268/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:46.720 [269/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:46.720 [270/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.720 [271/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:01:46.720 [272/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:46.720 [273/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:46.720 [274/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.720 [275/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:46.720 [276/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:46.720 [277/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:46.720 [278/378] Linking static target drivers/librte_compress_isal.a 00:01:46.720 [279/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.720 
[280/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:01:46.720 [281/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:46.978 [282/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:46.978 [283/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.978 [284/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.978 [285/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:46.978 [286/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:46.978 [287/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.978 [288/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.978 [289/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:46.978 [290/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:46.978 [291/378] Linking static target drivers/librte_mempool_ring.a 00:01:46.978 [292/378] Linking static target drivers/librte_crypto_mlx5.a 00:01:46.978 [293/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:46.978 [294/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:46.978 [295/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.978 [296/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:46.978 [297/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:46.978 [298/378] Compiling C object 
drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:46.978 [299/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:46.978 [300/378] Linking static target drivers/librte_compress_mlx5.a 00:01:46.978 [301/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.978 [302/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:46.978 [303/378] Linking static target lib/librte_ethdev.a 00:01:46.978 [304/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:46.978 [305/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.236 [306/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:47.236 [307/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:47.236 [308/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:47.236 [309/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:47.236 [310/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:47.236 [311/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.236 [312/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:47.236 [313/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.236 [314/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:47.236 [315/378] Linking static target drivers/librte_common_mlx5.a 00:01:47.236 [316/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:47.495 [317/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.495 [318/378] Compiling 
C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:01:47.495 [319/378] Linking static target drivers/libtmp_rte_common_qat.a 00:01:47.752 [320/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:01:47.752 [321/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:47.752 [322/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:48.010 [323/378] Linking static target drivers/librte_common_qat.a 00:01:48.269 [324/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:48.269 [325/378] Linking static target lib/librte_vhost.a 00:01:48.527 [326/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.429 [327/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.713 [328/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.243 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.805 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.081 [331/378] Linking target lib/librte_eal.so.24.1 00:01:59.081 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:59.081 [333/378] Linking target lib/librte_meter.so.24.1 00:01:59.081 [334/378] Linking target lib/librte_ring.so.24.1 00:01:59.081 [335/378] Linking target lib/librte_timer.so.24.1 00:01:59.081 [336/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:01:59.081 [337/378] Linking target lib/librte_pci.so.24.1 00:01:59.081 [338/378] Linking target lib/librte_dmadev.so.24.1 00:01:59.081 [339/378] Linking target drivers/librte_bus_vdev.so.24.1 00:01:59.342 [340/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:59.342 [341/378] 
Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:59.342 [342/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:59.342 [343/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:59.342 [344/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:59.342 [345/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:01:59.342 [346/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:01:59.342 [347/378] Linking target lib/librte_rcu.so.24.1 00:01:59.342 [348/378] Linking target lib/librte_mempool.so.24.1 00:01:59.342 [349/378] Linking target drivers/librte_bus_pci.so.24.1 00:01:59.342 [350/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:01:59.342 [351/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:59.342 [352/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:59.600 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:01:59.600 [354/378] Linking target lib/librte_mbuf.so.24.1 00:01:59.600 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:59.600 [356/378] Linking target lib/librte_cryptodev.so.24.1 00:01:59.600 [357/378] Linking target lib/librte_reorder.so.24.1 00:01:59.600 [358/378] Linking target lib/librte_net.so.24.1 00:01:59.600 [359/378] Linking target lib/librte_compressdev.so.24.1 00:01:59.858 [360/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:59.858 [361/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:59.858 [362/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:01:59.858 [363/378] Linking target 
lib/librte_hash.so.24.1 00:01:59.858 [364/378] Linking target lib/librte_security.so.24.1 00:01:59.858 [365/378] Linking target lib/librte_cmdline.so.24.1 00:01:59.858 [366/378] Linking target drivers/librte_compress_isal.so.24.1 00:01:59.858 [367/378] Linking target lib/librte_ethdev.so.24.1 00:01:59.858 [368/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:00.118 [369/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:02:00.118 [370/378] Linking target drivers/librte_common_mlx5.so.24.1 00:02:00.118 [371/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:00.118 [372/378] Linking target lib/librte_power.so.24.1 00:02:00.118 [373/378] Linking target lib/librte_vhost.so.24.1 00:02:00.118 [374/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:02:00.118 [375/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:02:00.376 [376/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:02:00.376 [377/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:02:00.376 [378/378] Linking target drivers/librte_common_qat.so.24.1 00:02:00.376 INFO: autodetecting backend as ninja 00:02:00.376 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:01.315 CC lib/ut/ut.o 00:02:01.315 CC lib/log/log.o 00:02:01.315 CC lib/log/log_flags.o 00:02:01.315 CC lib/log/log_deprecated.o 00:02:01.315 CC lib/ut_mock/mock.o 00:02:01.573 LIB libspdk_ut.a 00:02:01.573 LIB libspdk_log.a 00:02:01.573 SO libspdk_ut.so.2.0 00:02:01.573 LIB libspdk_ut_mock.a 00:02:01.573 SO libspdk_log.so.7.0 00:02:01.573 SO libspdk_ut_mock.so.6.0 00:02:01.573 SYMLINK libspdk_ut.so 00:02:01.573 SYMLINK libspdk_log.so 00:02:01.573 SYMLINK libspdk_ut_mock.so 00:02:01.833 CC lib/util/bit_array.o 00:02:01.833 CC lib/util/base64.o 00:02:01.833 CC 
lib/util/cpuset.o 00:02:01.833 CC lib/util/crc16.o 00:02:01.833 CC lib/util/crc32.o 00:02:01.833 CC lib/util/crc32c.o 00:02:01.833 CC lib/util/crc32_ieee.o 00:02:01.833 CC lib/util/crc64.o 00:02:01.833 CC lib/util/dif.o 00:02:01.833 CC lib/util/fd.o 00:02:01.833 CC lib/util/fd_group.o 00:02:01.833 CC lib/util/file.o 00:02:01.833 CC lib/util/hexlify.o 00:02:01.833 CC lib/util/iov.o 00:02:01.833 CC lib/util/math.o 00:02:02.092 CC lib/util/pipe.o 00:02:02.092 CC lib/util/net.o 00:02:02.092 CC lib/util/uuid.o 00:02:02.092 CC lib/util/string.o 00:02:02.092 CC lib/util/strerror_tls.o 00:02:02.092 CC lib/util/zipf.o 00:02:02.092 CC lib/util/xor.o 00:02:02.092 CC lib/dma/dma.o 00:02:02.092 CXX lib/trace_parser/trace.o 00:02:02.092 CC lib/ioat/ioat.o 00:02:02.092 CC lib/vfio_user/host/vfio_user.o 00:02:02.092 CC lib/vfio_user/host/vfio_user_pci.o 00:02:02.092 LIB libspdk_dma.a 00:02:02.351 SO libspdk_dma.so.4.0 00:02:02.351 LIB libspdk_ioat.a 00:02:02.351 SO libspdk_ioat.so.7.0 00:02:02.351 SYMLINK libspdk_dma.so 00:02:02.351 LIB libspdk_util.a 00:02:02.351 LIB libspdk_vfio_user.a 00:02:02.351 SYMLINK libspdk_ioat.so 00:02:02.351 SO libspdk_vfio_user.so.5.0 00:02:02.351 SO libspdk_util.so.10.0 00:02:02.351 SYMLINK libspdk_vfio_user.so 00:02:02.609 SYMLINK libspdk_util.so 00:02:02.609 LIB libspdk_trace_parser.a 00:02:02.609 SO libspdk_trace_parser.so.5.0 00:02:02.867 SYMLINK libspdk_trace_parser.so 00:02:02.867 CC lib/conf/conf.o 00:02:02.867 CC lib/json/json_parse.o 00:02:02.867 CC lib/json/json_util.o 00:02:02.867 CC lib/env_dpdk/env.o 00:02:02.867 CC lib/env_dpdk/memory.o 00:02:02.867 CC lib/env_dpdk/pci.o 00:02:02.867 CC lib/json/json_write.o 00:02:02.867 CC lib/env_dpdk/init.o 00:02:02.867 CC lib/env_dpdk/pci_ioat.o 00:02:02.867 CC lib/env_dpdk/threads.o 00:02:02.867 CC lib/env_dpdk/pci_virtio.o 00:02:02.867 CC lib/env_dpdk/pci_idxd.o 00:02:02.867 CC lib/env_dpdk/pci_vmd.o 00:02:02.867 CC lib/reduce/reduce.o 00:02:02.867 CC lib/env_dpdk/pci_event.o 00:02:02.867 CC 
lib/rdma_utils/rdma_utils.o 00:02:02.867 CC lib/env_dpdk/sigbus_handler.o 00:02:02.867 CC lib/env_dpdk/pci_dpdk.o 00:02:02.867 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:02.867 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:02.867 CC lib/vmd/vmd.o 00:02:02.867 CC lib/vmd/led.o 00:02:02.867 CC lib/idxd/idxd.o 00:02:02.867 CC lib/idxd/idxd_kernel.o 00:02:02.867 CC lib/rdma_provider/common.o 00:02:02.867 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:02.867 CC lib/idxd/idxd_user.o 00:02:03.124 LIB libspdk_conf.a 00:02:03.124 LIB libspdk_rdma_provider.a 00:02:03.124 SO libspdk_conf.so.6.0 00:02:03.124 SO libspdk_rdma_provider.so.6.0 00:02:03.124 LIB libspdk_json.a 00:02:03.124 LIB libspdk_rdma_utils.a 00:02:03.124 SYMLINK libspdk_conf.so 00:02:03.124 SO libspdk_rdma_utils.so.1.0 00:02:03.124 SYMLINK libspdk_rdma_provider.so 00:02:03.124 SO libspdk_json.so.6.0 00:02:03.382 SYMLINK libspdk_json.so 00:02:03.382 SYMLINK libspdk_rdma_utils.so 00:02:03.382 LIB libspdk_idxd.a 00:02:03.382 LIB libspdk_vmd.a 00:02:03.382 SO libspdk_idxd.so.12.0 00:02:03.382 LIB libspdk_reduce.a 00:02:03.382 SO libspdk_vmd.so.6.0 00:02:03.382 SO libspdk_reduce.so.6.1 00:02:03.639 SYMLINK libspdk_idxd.so 00:02:03.639 SYMLINK libspdk_reduce.so 00:02:03.639 SYMLINK libspdk_vmd.so 00:02:03.639 CC lib/jsonrpc/jsonrpc_server.o 00:02:03.639 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:03.639 CC lib/jsonrpc/jsonrpc_client.o 00:02:03.639 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:03.897 LIB libspdk_jsonrpc.a 00:02:03.897 SO libspdk_jsonrpc.so.6.0 00:02:03.897 LIB libspdk_env_dpdk.a 00:02:03.897 SYMLINK libspdk_jsonrpc.so 00:02:03.897 SO libspdk_env_dpdk.so.15.0 00:02:04.155 SYMLINK libspdk_env_dpdk.so 00:02:04.413 CC lib/rpc/rpc.o 00:02:04.413 LIB libspdk_rpc.a 00:02:04.413 SO libspdk_rpc.so.6.0 00:02:04.670 SYMLINK libspdk_rpc.so 00:02:04.931 CC lib/trace/trace.o 00:02:04.931 CC lib/trace/trace_flags.o 00:02:04.931 CC lib/trace/trace_rpc.o 00:02:04.931 CC lib/keyring/keyring.o 00:02:04.931 CC lib/keyring/keyring_rpc.o 
00:02:04.931 CC lib/notify/notify.o 00:02:04.931 CC lib/notify/notify_rpc.o 00:02:05.190 LIB libspdk_notify.a 00:02:05.190 LIB libspdk_trace.a 00:02:05.190 LIB libspdk_keyring.a 00:02:05.190 SO libspdk_notify.so.6.0 00:02:05.190 SO libspdk_trace.so.10.0 00:02:05.190 SO libspdk_keyring.so.1.0 00:02:05.190 SYMLINK libspdk_notify.so 00:02:05.190 SYMLINK libspdk_trace.so 00:02:05.190 SYMLINK libspdk_keyring.so 00:02:05.755 CC lib/sock/sock.o 00:02:05.755 CC lib/sock/sock_rpc.o 00:02:05.755 CC lib/thread/thread.o 00:02:05.755 CC lib/thread/iobuf.o 00:02:06.013 LIB libspdk_sock.a 00:02:06.013 SO libspdk_sock.so.10.0 00:02:06.013 SYMLINK libspdk_sock.so 00:02:06.272 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:06.272 CC lib/nvme/nvme_ctrlr.o 00:02:06.272 CC lib/nvme/nvme_ns_cmd.o 00:02:06.272 CC lib/nvme/nvme_fabric.o 00:02:06.272 CC lib/nvme/nvme_ns.o 00:02:06.272 CC lib/nvme/nvme_pcie_common.o 00:02:06.272 CC lib/nvme/nvme_pcie.o 00:02:06.272 CC lib/nvme/nvme_qpair.o 00:02:06.272 CC lib/nvme/nvme.o 00:02:06.272 CC lib/nvme/nvme_quirks.o 00:02:06.272 CC lib/nvme/nvme_transport.o 00:02:06.272 CC lib/nvme/nvme_discovery.o 00:02:06.272 CC lib/nvme/nvme_opal.o 00:02:06.272 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:06.272 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:06.272 CC lib/nvme/nvme_tcp.o 00:02:06.272 CC lib/nvme/nvme_io_msg.o 00:02:06.272 CC lib/nvme/nvme_poll_group.o 00:02:06.272 CC lib/nvme/nvme_zns.o 00:02:06.272 CC lib/nvme/nvme_stubs.o 00:02:06.272 CC lib/nvme/nvme_auth.o 00:02:06.272 CC lib/nvme/nvme_cuse.o 00:02:06.272 CC lib/nvme/nvme_rdma.o 00:02:06.530 LIB libspdk_thread.a 00:02:06.787 SO libspdk_thread.so.10.1 00:02:06.787 SYMLINK libspdk_thread.so 00:02:07.046 CC lib/blob/zeroes.o 00:02:07.046 CC lib/blob/blobstore.o 00:02:07.046 CC lib/blob/request.o 00:02:07.046 CC lib/blob/blob_bs_dev.o 00:02:07.046 CC lib/accel/accel_rpc.o 00:02:07.046 CC lib/accel/accel.o 00:02:07.046 CC lib/accel/accel_sw.o 00:02:07.046 CC lib/init/json_config.o 00:02:07.046 CC lib/virtio/virtio.o 
00:02:07.046 CC lib/init/subsystem.o 00:02:07.046 CC lib/init/subsystem_rpc.o 00:02:07.046 CC lib/virtio/virtio_vhost_user.o 00:02:07.046 CC lib/init/rpc.o 00:02:07.046 CC lib/virtio/virtio_vfio_user.o 00:02:07.046 CC lib/virtio/virtio_pci.o 00:02:07.304 LIB libspdk_init.a 00:02:07.304 SO libspdk_init.so.5.0 00:02:07.304 LIB libspdk_virtio.a 00:02:07.304 SO libspdk_virtio.so.7.0 00:02:07.304 SYMLINK libspdk_init.so 00:02:07.562 SYMLINK libspdk_virtio.so 00:02:07.820 CC lib/event/app.o 00:02:07.820 CC lib/event/reactor.o 00:02:07.820 CC lib/event/log_rpc.o 00:02:07.820 CC lib/event/app_rpc.o 00:02:07.820 CC lib/event/scheduler_static.o 00:02:07.820 LIB libspdk_accel.a 00:02:07.820 SO libspdk_accel.so.16.0 00:02:07.820 LIB libspdk_nvme.a 00:02:07.820 SYMLINK libspdk_accel.so 00:02:08.078 SO libspdk_nvme.so.13.1 00:02:08.078 LIB libspdk_event.a 00:02:08.078 SO libspdk_event.so.14.0 00:02:08.078 SYMLINK libspdk_event.so 00:02:08.336 SYMLINK libspdk_nvme.so 00:02:08.336 CC lib/bdev/bdev.o 00:02:08.336 CC lib/bdev/bdev_rpc.o 00:02:08.336 CC lib/bdev/bdev_zone.o 00:02:08.336 CC lib/bdev/part.o 00:02:08.336 CC lib/bdev/scsi_nvme.o 00:02:09.271 LIB libspdk_blob.a 00:02:09.271 SO libspdk_blob.so.11.0 00:02:09.271 SYMLINK libspdk_blob.so 00:02:09.529 CC lib/blobfs/blobfs.o 00:02:09.529 CC lib/blobfs/tree.o 00:02:09.529 CC lib/lvol/lvol.o 00:02:10.094 LIB libspdk_bdev.a 00:02:10.094 SO libspdk_bdev.so.16.0 00:02:10.094 LIB libspdk_blobfs.a 00:02:10.095 SYMLINK libspdk_bdev.so 00:02:10.095 SO libspdk_blobfs.so.10.0 00:02:10.353 LIB libspdk_lvol.a 00:02:10.353 SYMLINK libspdk_blobfs.so 00:02:10.353 SO libspdk_lvol.so.10.0 00:02:10.353 SYMLINK libspdk_lvol.so 00:02:10.612 CC lib/nvmf/subsystem.o 00:02:10.612 CC lib/nvmf/ctrlr.o 00:02:10.612 CC lib/nvmf/ctrlr_discovery.o 00:02:10.612 CC lib/nvmf/ctrlr_bdev.o 00:02:10.612 CC lib/nvmf/nvmf.o 00:02:10.612 CC lib/nvmf/transport.o 00:02:10.612 CC lib/nvmf/nvmf_rpc.o 00:02:10.612 CC lib/nvmf/stubs.o 00:02:10.612 CC lib/nvmf/tcp.o 
00:02:10.612 CC lib/nvmf/mdns_server.o 00:02:10.612 CC lib/nvmf/rdma.o 00:02:10.612 CC lib/nvmf/auth.o 00:02:10.612 CC lib/nbd/nbd.o 00:02:10.612 CC lib/nbd/nbd_rpc.o 00:02:10.612 CC lib/ublk/ublk_rpc.o 00:02:10.612 CC lib/ublk/ublk.o 00:02:10.612 CC lib/scsi/dev.o 00:02:10.612 CC lib/scsi/lun.o 00:02:10.612 CC lib/scsi/port.o 00:02:10.612 CC lib/ftl/ftl_core.o 00:02:10.612 CC lib/scsi/scsi.o 00:02:10.612 CC lib/ftl/ftl_init.o 00:02:10.612 CC lib/scsi/scsi_bdev.o 00:02:10.612 CC lib/ftl/ftl_layout.o 00:02:10.612 CC lib/scsi/scsi_pr.o 00:02:10.612 CC lib/ftl/ftl_sb.o 00:02:10.612 CC lib/ftl/ftl_debug.o 00:02:10.612 CC lib/scsi/scsi_rpc.o 00:02:10.612 CC lib/ftl/ftl_io.o 00:02:10.612 CC lib/scsi/task.o 00:02:10.612 CC lib/ftl/ftl_l2p.o 00:02:10.612 CC lib/ftl/ftl_l2p_flat.o 00:02:10.612 CC lib/ftl/ftl_nv_cache.o 00:02:10.612 CC lib/ftl/ftl_band.o 00:02:10.612 CC lib/ftl/ftl_band_ops.o 00:02:10.612 CC lib/ftl/ftl_writer.o 00:02:10.612 CC lib/ftl/ftl_rq.o 00:02:10.612 CC lib/ftl/ftl_reloc.o 00:02:10.612 CC lib/ftl/ftl_l2p_cache.o 00:02:10.612 CC lib/ftl/ftl_p2l.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:10.612 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:10.612 CC lib/ftl/utils/ftl_conf.o 00:02:10.612 CC lib/ftl/utils/ftl_mempool.o 00:02:10.612 CC lib/ftl/utils/ftl_md.o 00:02:10.612 CC lib/ftl/utils/ftl_bitmap.o 00:02:10.612 CC lib/ftl/utils/ftl_property.o 00:02:10.612 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:10.612 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 
00:02:10.612 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:10.612 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:10.612 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:10.612 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:10.612 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:10.612 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:10.612 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:10.612 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:10.612 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:10.612 CC lib/ftl/base/ftl_base_dev.o 00:02:10.612 CC lib/ftl/base/ftl_base_bdev.o 00:02:10.612 CC lib/ftl/ftl_trace.o 00:02:11.179 LIB libspdk_nbd.a 00:02:11.179 SO libspdk_nbd.so.7.0 00:02:11.179 LIB libspdk_scsi.a 00:02:11.179 SYMLINK libspdk_nbd.so 00:02:11.179 SO libspdk_scsi.so.9.0 00:02:11.179 LIB libspdk_ublk.a 00:02:11.179 SO libspdk_ublk.so.3.0 00:02:11.179 SYMLINK libspdk_scsi.so 00:02:11.179 SYMLINK libspdk_ublk.so 00:02:11.437 LIB libspdk_ftl.a 00:02:11.696 CC lib/iscsi/conn.o 00:02:11.696 CC lib/iscsi/init_grp.o 00:02:11.696 CC lib/iscsi/iscsi.o 00:02:11.696 CC lib/iscsi/md5.o 00:02:11.696 CC lib/iscsi/portal_grp.o 00:02:11.696 CC lib/iscsi/param.o 00:02:11.696 CC lib/iscsi/tgt_node.o 00:02:11.696 CC lib/iscsi/iscsi_subsystem.o 00:02:11.696 CC lib/iscsi/iscsi_rpc.o 00:02:11.696 CC lib/iscsi/task.o 00:02:11.696 CC lib/vhost/vhost.o 00:02:11.696 CC lib/vhost/vhost_rpc.o 00:02:11.696 CC lib/vhost/vhost_scsi.o 00:02:11.696 CC lib/vhost/vhost_blk.o 00:02:11.696 CC lib/vhost/rte_vhost_user.o 00:02:11.696 SO libspdk_ftl.so.9.0 00:02:11.991 SYMLINK libspdk_ftl.so 00:02:12.250 LIB libspdk_nvmf.a 00:02:12.250 SO libspdk_nvmf.so.19.0 00:02:12.509 SYMLINK libspdk_nvmf.so 00:02:12.509 LIB libspdk_vhost.a 00:02:12.509 SO libspdk_vhost.so.8.0 00:02:12.509 LIB libspdk_iscsi.a 00:02:12.509 SYMLINK libspdk_vhost.so 00:02:12.768 SO libspdk_iscsi.so.8.0 00:02:12.768 SYMLINK libspdk_iscsi.so 00:02:13.335 CC module/env_dpdk/env_dpdk_rpc.o 00:02:13.593 LIB libspdk_env_dpdk_rpc.a 00:02:13.593 CC module/accel/ioat/accel_ioat.o 00:02:13.593 CC 
module/accel/ioat/accel_ioat_rpc.o 00:02:13.593 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:13.593 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:13.593 CC module/sock/posix/posix.o 00:02:13.593 CC module/accel/iaa/accel_iaa.o 00:02:13.593 CC module/accel/iaa/accel_iaa_rpc.o 00:02:13.593 CC module/keyring/file/keyring_rpc.o 00:02:13.593 CC module/keyring/file/keyring.o 00:02:13.593 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:13.593 CC module/accel/error/accel_error.o 00:02:13.594 CC module/accel/error/accel_error_rpc.o 00:02:13.594 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:13.594 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:13.594 CC module/keyring/linux/keyring.o 00:02:13.594 CC module/scheduler/gscheduler/gscheduler.o 00:02:13.594 SO libspdk_env_dpdk_rpc.so.6.0 00:02:13.594 CC module/keyring/linux/keyring_rpc.o 00:02:13.594 CC module/blob/bdev/blob_bdev.o 00:02:13.594 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:13.594 CC module/accel/dsa/accel_dsa.o 00:02:13.594 CC module/accel/dsa/accel_dsa_rpc.o 00:02:13.594 SYMLINK libspdk_env_dpdk_rpc.so 00:02:13.594 LIB libspdk_accel_ioat.a 00:02:13.594 LIB libspdk_keyring_file.a 00:02:13.594 LIB libspdk_scheduler_gscheduler.a 00:02:13.594 LIB libspdk_keyring_linux.a 00:02:13.594 LIB libspdk_scheduler_dpdk_governor.a 00:02:13.594 SO libspdk_accel_ioat.so.6.0 00:02:13.594 SO libspdk_scheduler_gscheduler.so.4.0 00:02:13.853 LIB libspdk_accel_error.a 00:02:13.853 SO libspdk_keyring_linux.so.1.0 00:02:13.853 LIB libspdk_scheduler_dynamic.a 00:02:13.853 SO libspdk_keyring_file.so.1.0 00:02:13.853 LIB libspdk_accel_iaa.a 00:02:13.853 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:13.853 SO libspdk_accel_error.so.2.0 00:02:13.853 SO libspdk_scheduler_dynamic.so.4.0 00:02:13.853 SO libspdk_accel_iaa.so.3.0 00:02:13.853 SYMLINK libspdk_scheduler_gscheduler.so 00:02:13.853 SYMLINK libspdk_keyring_file.so 00:02:13.853 SYMLINK 
libspdk_keyring_linux.so 00:02:13.853 SYMLINK libspdk_accel_ioat.so 00:02:13.853 LIB libspdk_accel_dsa.a 00:02:13.853 LIB libspdk_blob_bdev.a 00:02:13.853 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:13.853 SO libspdk_accel_dsa.so.5.0 00:02:13.853 SYMLINK libspdk_accel_error.so 00:02:13.853 SYMLINK libspdk_accel_iaa.so 00:02:13.853 SO libspdk_blob_bdev.so.11.0 00:02:13.853 SYMLINK libspdk_scheduler_dynamic.so 00:02:13.853 SYMLINK libspdk_accel_dsa.so 00:02:13.853 SYMLINK libspdk_blob_bdev.so 00:02:14.112 LIB libspdk_sock_posix.a 00:02:14.112 SO libspdk_sock_posix.so.6.0 00:02:14.371 SYMLINK libspdk_sock_posix.so 00:02:14.371 LIB libspdk_accel_dpdk_compressdev.a 00:02:14.371 SO libspdk_accel_dpdk_compressdev.so.3.0 00:02:14.371 CC module/bdev/gpt/gpt.o 00:02:14.371 CC module/bdev/gpt/vbdev_gpt.o 00:02:14.371 CC module/bdev/error/vbdev_error.o 00:02:14.371 CC module/bdev/error/vbdev_error_rpc.o 00:02:14.371 CC module/bdev/null/bdev_null_rpc.o 00:02:14.371 CC module/bdev/null/bdev_null.o 00:02:14.371 CC module/blobfs/bdev/blobfs_bdev.o 00:02:14.371 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:14.371 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:14.371 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:14.371 CC module/bdev/passthru/vbdev_passthru.o 00:02:14.371 CC module/bdev/delay/vbdev_delay.o 00:02:14.371 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:14.371 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:14.371 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:14.371 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:14.371 CC module/bdev/malloc/bdev_malloc.o 00:02:14.371 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:14.371 CC module/bdev/crypto/vbdev_crypto.o 00:02:14.371 CC module/bdev/raid/bdev_raid.o 00:02:14.371 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:14.371 CC module/bdev/raid/bdev_raid_rpc.o 00:02:14.371 CC module/bdev/iscsi/bdev_iscsi.o 00:02:14.371 CC module/bdev/raid/bdev_raid_sb.o 00:02:14.371 CC module/bdev/nvme/bdev_nvme.o 00:02:14.371 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:02:14.371 CC module/bdev/aio/bdev_aio.o 00:02:14.371 CC module/bdev/nvme/nvme_rpc.o 00:02:14.371 CC module/bdev/raid/raid0.o 00:02:14.371 CC module/bdev/aio/bdev_aio_rpc.o 00:02:14.371 CC module/bdev/split/vbdev_split.o 00:02:14.371 CC module/bdev/raid/raid1.o 00:02:14.371 CC module/bdev/split/vbdev_split_rpc.o 00:02:14.371 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:14.371 CC module/bdev/lvol/vbdev_lvol.o 00:02:14.371 CC module/bdev/raid/concat.o 00:02:14.371 CC module/bdev/nvme/bdev_mdns_client.o 00:02:14.371 CC module/bdev/nvme/vbdev_opal.o 00:02:14.371 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:14.371 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:14.371 CC module/bdev/ftl/bdev_ftl.o 00:02:14.371 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:14.371 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:14.371 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:14.371 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:14.371 CC module/bdev/compress/vbdev_compress.o 00:02:14.371 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:14.630 LIB libspdk_accel_dpdk_cryptodev.a 00:02:14.630 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:14.630 LIB libspdk_blobfs_bdev.a 00:02:14.630 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:14.630 SO libspdk_blobfs_bdev.so.6.0 00:02:14.630 LIB libspdk_bdev_null.a 00:02:14.630 LIB libspdk_bdev_error.a 00:02:14.630 LIB libspdk_bdev_gpt.a 00:02:14.630 SO libspdk_bdev_null.so.6.0 00:02:14.630 LIB libspdk_bdev_split.a 00:02:14.630 SO libspdk_bdev_gpt.so.6.0 00:02:14.630 SO libspdk_bdev_error.so.6.0 00:02:14.889 LIB libspdk_bdev_passthru.a 00:02:14.889 LIB libspdk_bdev_ftl.a 00:02:14.889 SYMLINK libspdk_blobfs_bdev.so 00:02:14.889 LIB libspdk_bdev_zone_block.a 00:02:14.889 SO libspdk_bdev_passthru.so.6.0 00:02:14.889 LIB libspdk_bdev_aio.a 00:02:14.889 SO libspdk_bdev_split.so.6.0 00:02:14.889 SO libspdk_bdev_ftl.so.6.0 00:02:14.889 LIB libspdk_bdev_delay.a 00:02:14.889 SYMLINK libspdk_bdev_null.so 00:02:14.889 SYMLINK 
libspdk_bdev_gpt.so 00:02:14.889 LIB libspdk_bdev_crypto.a 00:02:14.889 SO libspdk_bdev_zone_block.so.6.0 00:02:14.889 SYMLINK libspdk_bdev_error.so 00:02:14.889 SO libspdk_bdev_aio.so.6.0 00:02:14.889 SO libspdk_bdev_delay.so.6.0 00:02:14.889 LIB libspdk_bdev_malloc.a 00:02:14.889 LIB libspdk_bdev_iscsi.a 00:02:14.889 SO libspdk_bdev_crypto.so.6.0 00:02:14.889 LIB libspdk_bdev_compress.a 00:02:14.889 SYMLINK libspdk_bdev_split.so 00:02:14.889 SYMLINK libspdk_bdev_passthru.so 00:02:14.889 SYMLINK libspdk_bdev_zone_block.so 00:02:14.889 SYMLINK libspdk_bdev_ftl.so 00:02:14.889 SO libspdk_bdev_iscsi.so.6.0 00:02:14.889 SO libspdk_bdev_malloc.so.6.0 00:02:14.889 SYMLINK libspdk_bdev_delay.so 00:02:14.889 SYMLINK libspdk_bdev_crypto.so 00:02:14.889 SYMLINK libspdk_bdev_aio.so 00:02:14.889 SO libspdk_bdev_compress.so.6.0 00:02:14.889 LIB libspdk_bdev_lvol.a 00:02:14.889 SYMLINK libspdk_bdev_malloc.so 00:02:14.889 SYMLINK libspdk_bdev_iscsi.so 00:02:14.889 LIB libspdk_bdev_virtio.a 00:02:14.889 SYMLINK libspdk_bdev_compress.so 00:02:14.889 SO libspdk_bdev_lvol.so.6.0 00:02:14.889 SO libspdk_bdev_virtio.so.6.0 00:02:15.146 SYMLINK libspdk_bdev_virtio.so 00:02:15.146 SYMLINK libspdk_bdev_lvol.so 00:02:15.146 LIB libspdk_bdev_raid.a 00:02:15.405 SO libspdk_bdev_raid.so.6.0 00:02:15.405 SYMLINK libspdk_bdev_raid.so 00:02:15.973 LIB libspdk_bdev_nvme.a 00:02:16.232 SO libspdk_bdev_nvme.so.7.0 00:02:16.232 SYMLINK libspdk_bdev_nvme.so 00:02:17.172 CC module/event/subsystems/vmd/vmd.o 00:02:17.172 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:17.172 CC module/event/subsystems/iobuf/iobuf.o 00:02:17.172 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:17.172 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:17.172 CC module/event/subsystems/keyring/keyring.o 00:02:17.172 CC module/event/subsystems/scheduler/scheduler.o 00:02:17.172 CC module/event/subsystems/sock/sock.o 00:02:17.172 LIB libspdk_event_vmd.a 00:02:17.172 LIB libspdk_event_keyring.a 00:02:17.172 LIB 
libspdk_event_vhost_blk.a 00:02:17.172 LIB libspdk_event_iobuf.a 00:02:17.172 LIB libspdk_event_sock.a 00:02:17.172 LIB libspdk_event_scheduler.a 00:02:17.172 SO libspdk_event_keyring.so.1.0 00:02:17.172 SO libspdk_event_vhost_blk.so.3.0 00:02:17.172 SO libspdk_event_vmd.so.6.0 00:02:17.172 SO libspdk_event_iobuf.so.3.0 00:02:17.172 SO libspdk_event_sock.so.5.0 00:02:17.172 SO libspdk_event_scheduler.so.4.0 00:02:17.172 SYMLINK libspdk_event_keyring.so 00:02:17.172 SYMLINK libspdk_event_vhost_blk.so 00:02:17.172 SYMLINK libspdk_event_iobuf.so 00:02:17.172 SYMLINK libspdk_event_scheduler.so 00:02:17.172 SYMLINK libspdk_event_vmd.so 00:02:17.172 SYMLINK libspdk_event_sock.so 00:02:17.742 CC module/event/subsystems/accel/accel.o 00:02:17.742 LIB libspdk_event_accel.a 00:02:17.742 SO libspdk_event_accel.so.6.0 00:02:17.742 SYMLINK libspdk_event_accel.so 00:02:18.311 CC module/event/subsystems/bdev/bdev.o 00:02:18.311 LIB libspdk_event_bdev.a 00:02:18.570 SO libspdk_event_bdev.so.6.0 00:02:18.570 SYMLINK libspdk_event_bdev.so 00:02:18.829 CC module/event/subsystems/scsi/scsi.o 00:02:18.829 CC module/event/subsystems/ublk/ublk.o 00:02:18.829 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:18.829 CC module/event/subsystems/nbd/nbd.o 00:02:18.829 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:19.089 LIB libspdk_event_scsi.a 00:02:19.089 LIB libspdk_event_ublk.a 00:02:19.089 LIB libspdk_event_nbd.a 00:02:19.089 SO libspdk_event_scsi.so.6.0 00:02:19.089 SO libspdk_event_ublk.so.3.0 00:02:19.089 SO libspdk_event_nbd.so.6.0 00:02:19.089 LIB libspdk_event_nvmf.a 00:02:19.089 SYMLINK libspdk_event_scsi.so 00:02:19.089 SYMLINK libspdk_event_ublk.so 00:02:19.089 SO libspdk_event_nvmf.so.6.0 00:02:19.089 SYMLINK libspdk_event_nbd.so 00:02:19.089 SYMLINK libspdk_event_nvmf.so 00:02:19.349 CC module/event/subsystems/iscsi/iscsi.o 00:02:19.349 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:19.608 LIB libspdk_event_iscsi.a 00:02:19.608 LIB libspdk_event_vhost_scsi.a 
00:02:19.608 SO libspdk_event_iscsi.so.6.0 00:02:19.608 SO libspdk_event_vhost_scsi.so.3.0 00:02:19.608 SYMLINK libspdk_event_iscsi.so 00:02:19.608 SYMLINK libspdk_event_vhost_scsi.so 00:02:19.868 SO libspdk.so.6.0 00:02:19.868 SYMLINK libspdk.so 00:02:20.440 CC app/spdk_lspci/spdk_lspci.o 00:02:20.440 CC app/trace_record/trace_record.o 00:02:20.440 TEST_HEADER include/spdk/accel.h 00:02:20.440 TEST_HEADER include/spdk/assert.h 00:02:20.440 TEST_HEADER include/spdk/accel_module.h 00:02:20.440 CC app/spdk_nvme_perf/perf.o 00:02:20.440 TEST_HEADER include/spdk/bdev.h 00:02:20.440 CC app/spdk_nvme_identify/identify.o 00:02:20.440 TEST_HEADER include/spdk/base64.h 00:02:20.440 TEST_HEADER include/spdk/bdev_module.h 00:02:20.440 TEST_HEADER include/spdk/bdev_zone.h 00:02:20.440 TEST_HEADER include/spdk/barrier.h 00:02:20.440 CC test/rpc_client/rpc_client_test.o 00:02:20.440 TEST_HEADER include/spdk/bit_array.h 00:02:20.440 TEST_HEADER include/spdk/bit_pool.h 00:02:20.440 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:20.440 TEST_HEADER include/spdk/blob_bdev.h 00:02:20.440 CC app/spdk_nvme_discover/discovery_aer.o 00:02:20.440 TEST_HEADER include/spdk/blobfs.h 00:02:20.440 TEST_HEADER include/spdk/config.h 00:02:20.440 CXX app/trace/trace.o 00:02:20.440 TEST_HEADER include/spdk/blob.h 00:02:20.440 TEST_HEADER include/spdk/conf.h 00:02:20.440 TEST_HEADER include/spdk/cpuset.h 00:02:20.440 TEST_HEADER include/spdk/crc16.h 00:02:20.440 CC app/spdk_top/spdk_top.o 00:02:20.440 TEST_HEADER include/spdk/crc32.h 00:02:20.440 TEST_HEADER include/spdk/crc64.h 00:02:20.440 TEST_HEADER include/spdk/dma.h 00:02:20.440 TEST_HEADER include/spdk/dif.h 00:02:20.440 TEST_HEADER include/spdk/endian.h 00:02:20.440 TEST_HEADER include/spdk/event.h 00:02:20.440 TEST_HEADER include/spdk/env_dpdk.h 00:02:20.440 TEST_HEADER include/spdk/env.h 00:02:20.440 TEST_HEADER include/spdk/fd_group.h 00:02:20.440 TEST_HEADER include/spdk/fd.h 00:02:20.440 TEST_HEADER include/spdk/ftl.h 00:02:20.440 
TEST_HEADER include/spdk/gpt_spec.h 00:02:20.440 TEST_HEADER include/spdk/file.h 00:02:20.440 TEST_HEADER include/spdk/hexlify.h 00:02:20.440 TEST_HEADER include/spdk/histogram_data.h 00:02:20.440 CC app/nvmf_tgt/nvmf_main.o 00:02:20.440 TEST_HEADER include/spdk/idxd.h 00:02:20.440 TEST_HEADER include/spdk/idxd_spec.h 00:02:20.440 TEST_HEADER include/spdk/init.h 00:02:20.440 TEST_HEADER include/spdk/ioat.h 00:02:20.440 TEST_HEADER include/spdk/ioat_spec.h 00:02:20.440 CC app/spdk_dd/spdk_dd.o 00:02:20.440 TEST_HEADER include/spdk/iscsi_spec.h 00:02:20.440 TEST_HEADER include/spdk/json.h 00:02:20.440 TEST_HEADER include/spdk/keyring.h 00:02:20.440 TEST_HEADER include/spdk/jsonrpc.h 00:02:20.440 TEST_HEADER include/spdk/keyring_module.h 00:02:20.440 TEST_HEADER include/spdk/log.h 00:02:20.440 TEST_HEADER include/spdk/likely.h 00:02:20.440 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:20.440 TEST_HEADER include/spdk/lvol.h 00:02:20.440 TEST_HEADER include/spdk/memory.h 00:02:20.440 TEST_HEADER include/spdk/nbd.h 00:02:20.440 TEST_HEADER include/spdk/mmio.h 00:02:20.440 TEST_HEADER include/spdk/notify.h 00:02:20.440 TEST_HEADER include/spdk/net.h 00:02:20.440 CC app/iscsi_tgt/iscsi_tgt.o 00:02:20.440 TEST_HEADER include/spdk/nvme.h 00:02:20.440 TEST_HEADER include/spdk/nvme_intel.h 00:02:20.440 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:20.440 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:20.440 TEST_HEADER include/spdk/nvme_spec.h 00:02:20.440 TEST_HEADER include/spdk/nvme_zns.h 00:02:20.440 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:20.440 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:20.440 TEST_HEADER include/spdk/nvmf_transport.h 00:02:20.440 TEST_HEADER include/spdk/nvmf.h 00:02:20.440 TEST_HEADER include/spdk/nvmf_spec.h 00:02:20.440 TEST_HEADER include/spdk/opal_spec.h 00:02:20.440 TEST_HEADER include/spdk/opal.h 00:02:20.440 TEST_HEADER include/spdk/pci_ids.h 00:02:20.440 TEST_HEADER include/spdk/pipe.h 00:02:20.440 TEST_HEADER include/spdk/queue.h 
00:02:20.440 TEST_HEADER include/spdk/scheduler.h 00:02:20.440 TEST_HEADER include/spdk/reduce.h 00:02:20.440 TEST_HEADER include/spdk/scsi.h 00:02:20.440 TEST_HEADER include/spdk/rpc.h 00:02:20.440 TEST_HEADER include/spdk/scsi_spec.h 00:02:20.440 TEST_HEADER include/spdk/sock.h 00:02:20.440 TEST_HEADER include/spdk/stdinc.h 00:02:20.440 TEST_HEADER include/spdk/string.h 00:02:20.440 TEST_HEADER include/spdk/thread.h 00:02:20.440 TEST_HEADER include/spdk/trace.h 00:02:20.440 TEST_HEADER include/spdk/tree.h 00:02:20.440 TEST_HEADER include/spdk/ublk.h 00:02:20.440 TEST_HEADER include/spdk/trace_parser.h 00:02:20.440 CC app/spdk_tgt/spdk_tgt.o 00:02:20.440 TEST_HEADER include/spdk/util.h 00:02:20.440 TEST_HEADER include/spdk/uuid.h 00:02:20.440 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:20.440 TEST_HEADER include/spdk/version.h 00:02:20.440 TEST_HEADER include/spdk/vhost.h 00:02:20.440 TEST_HEADER include/spdk/vmd.h 00:02:20.440 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:20.440 TEST_HEADER include/spdk/zipf.h 00:02:20.440 TEST_HEADER include/spdk/xor.h 00:02:20.440 CXX test/cpp_headers/accel.o 00:02:20.440 CXX test/cpp_headers/assert.o 00:02:20.440 CXX test/cpp_headers/accel_module.o 00:02:20.440 CXX test/cpp_headers/barrier.o 00:02:20.440 CXX test/cpp_headers/base64.o 00:02:20.440 CXX test/cpp_headers/bdev_module.o 00:02:20.441 CXX test/cpp_headers/bdev.o 00:02:20.441 CXX test/cpp_headers/bit_array.o 00:02:20.441 CXX test/cpp_headers/bdev_zone.o 00:02:20.441 CXX test/cpp_headers/bit_pool.o 00:02:20.441 CXX test/cpp_headers/blob_bdev.o 00:02:20.441 CXX test/cpp_headers/blob.o 00:02:20.441 CXX test/cpp_headers/blobfs_bdev.o 00:02:20.441 CXX test/cpp_headers/blobfs.o 00:02:20.441 CXX test/cpp_headers/conf.o 00:02:20.441 CXX test/cpp_headers/config.o 00:02:20.441 CXX test/cpp_headers/cpuset.o 00:02:20.441 CXX test/cpp_headers/crc16.o 00:02:20.441 CXX test/cpp_headers/crc32.o 00:02:20.441 CXX test/cpp_headers/dif.o 00:02:20.441 CXX test/cpp_headers/crc64.o 
00:02:20.441 CXX test/cpp_headers/endian.o 00:02:20.441 CXX test/cpp_headers/dma.o 00:02:20.441 CXX test/cpp_headers/env.o 00:02:20.441 CXX test/cpp_headers/env_dpdk.o 00:02:20.441 CXX test/cpp_headers/event.o 00:02:20.441 CXX test/cpp_headers/fd_group.o 00:02:20.441 CXX test/cpp_headers/fd.o 00:02:20.441 CXX test/cpp_headers/ftl.o 00:02:20.441 CXX test/cpp_headers/file.o 00:02:20.441 CXX test/cpp_headers/gpt_spec.o 00:02:20.441 CXX test/cpp_headers/histogram_data.o 00:02:20.441 CXX test/cpp_headers/hexlify.o 00:02:20.441 CXX test/cpp_headers/init.o 00:02:20.441 CXX test/cpp_headers/idxd.o 00:02:20.441 CXX test/cpp_headers/idxd_spec.o 00:02:20.441 CXX test/cpp_headers/ioat.o 00:02:20.441 CXX test/cpp_headers/iscsi_spec.o 00:02:20.441 CXX test/cpp_headers/ioat_spec.o 00:02:20.441 CXX test/cpp_headers/json.o 00:02:20.441 CXX test/cpp_headers/jsonrpc.o 00:02:20.441 CXX test/cpp_headers/keyring.o 00:02:20.441 CXX test/cpp_headers/keyring_module.o 00:02:20.441 CXX test/cpp_headers/likely.o 00:02:20.441 CXX test/cpp_headers/log.o 00:02:20.441 CXX test/cpp_headers/lvol.o 00:02:20.441 CXX test/cpp_headers/mmio.o 00:02:20.441 CXX test/cpp_headers/nbd.o 00:02:20.441 CXX test/cpp_headers/memory.o 00:02:20.441 CXX test/cpp_headers/notify.o 00:02:20.441 CXX test/cpp_headers/net.o 00:02:20.441 CXX test/cpp_headers/nvme.o 00:02:20.441 CXX test/cpp_headers/nvme_intel.o 00:02:20.441 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:20.441 CXX test/cpp_headers/nvme_ocssd.o 00:02:20.441 CXX test/cpp_headers/nvme_spec.o 00:02:20.441 CXX test/cpp_headers/nvme_zns.o 00:02:20.441 CXX test/cpp_headers/nvmf_cmd.o 00:02:20.441 CXX test/cpp_headers/nvmf.o 00:02:20.441 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:20.441 CXX test/cpp_headers/nvmf_spec.o 00:02:20.441 CXX test/cpp_headers/nvmf_transport.o 00:02:20.441 CXX test/cpp_headers/opal.o 00:02:20.441 CXX test/cpp_headers/opal_spec.o 00:02:20.441 CXX test/cpp_headers/pci_ids.o 00:02:20.441 CXX test/cpp_headers/pipe.o 00:02:20.441 CXX 
test/cpp_headers/queue.o 00:02:20.441 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:20.441 CXX test/cpp_headers/reduce.o 00:02:20.441 CXX test/cpp_headers/rpc.o 00:02:20.441 CXX test/cpp_headers/scheduler.o 00:02:20.441 CXX test/cpp_headers/scsi.o 00:02:20.441 CXX test/cpp_headers/scsi_spec.o 00:02:20.441 CXX test/cpp_headers/sock.o 00:02:20.441 CXX test/cpp_headers/stdinc.o 00:02:20.441 CXX test/cpp_headers/string.o 00:02:20.441 CXX test/cpp_headers/thread.o 00:02:20.441 CXX test/cpp_headers/trace.o 00:02:20.441 CXX test/cpp_headers/trace_parser.o 00:02:20.441 CXX test/cpp_headers/tree.o 00:02:20.441 CXX test/cpp_headers/ublk.o 00:02:20.441 CXX test/cpp_headers/util.o 00:02:20.441 CXX test/cpp_headers/uuid.o 00:02:20.441 CC examples/ioat/verify/verify.o 00:02:20.441 CC test/env/pci/pci_ut.o 00:02:20.441 CC test/env/vtophys/vtophys.o 00:02:20.441 CC test/env/memory/memory_ut.o 00:02:20.441 CXX test/cpp_headers/version.o 00:02:20.441 CC examples/util/zipf/zipf.o 00:02:20.720 CC test/thread/poller_perf/poller_perf.o 00:02:20.720 CC examples/ioat/perf/perf.o 00:02:20.720 LINK spdk_lspci 00:02:20.720 CC test/app/jsoncat/jsoncat.o 00:02:20.720 CC test/app/stub/stub.o 00:02:20.720 CC test/app/histogram_perf/histogram_perf.o 00:02:20.720 CC app/fio/nvme/fio_plugin.o 00:02:20.720 CC test/app/bdev_svc/bdev_svc.o 00:02:20.720 CC test/dma/test_dma/test_dma.o 00:02:20.720 CXX test/cpp_headers/vfio_user_pci.o 00:02:20.720 CXX test/cpp_headers/vfio_user_spec.o 00:02:20.720 CC app/fio/bdev/fio_plugin.o 00:02:20.990 LINK rpc_client_test 00:02:20.990 LINK nvmf_tgt 00:02:20.990 LINK spdk_nvme_discover 00:02:20.990 LINK interrupt_tgt 00:02:20.990 LINK spdk_trace_record 00:02:21.249 CC test/env/mem_callbacks/mem_callbacks.o 00:02:21.249 LINK iscsi_tgt 00:02:21.249 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:21.249 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:21.249 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:21.249 LINK vtophys 00:02:21.249 LINK jsoncat 
00:02:21.250 LINK poller_perf 00:02:21.250 LINK env_dpdk_post_init 00:02:21.250 CXX test/cpp_headers/vhost.o 00:02:21.250 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:21.250 CXX test/cpp_headers/vmd.o 00:02:21.250 CXX test/cpp_headers/xor.o 00:02:21.250 CXX test/cpp_headers/zipf.o 00:02:21.250 LINK zipf 00:02:21.250 LINK stub 00:02:21.250 LINK spdk_tgt 00:02:21.250 LINK histogram_perf 00:02:21.250 LINK verify 00:02:21.250 LINK ioat_perf 00:02:21.250 LINK bdev_svc 00:02:21.250 LINK spdk_dd 00:02:21.508 LINK spdk_trace 00:02:21.508 LINK pci_ut 00:02:21.508 LINK test_dma 00:02:21.508 LINK nvme_fuzz 00:02:21.508 LINK spdk_bdev 00:02:21.766 LINK spdk_nvme 00:02:21.766 LINK vhost_fuzz 00:02:21.766 LINK spdk_top 00:02:21.766 LINK mem_callbacks 00:02:21.766 LINK spdk_nvme_identify 00:02:21.766 CC examples/vmd/led/led.o 00:02:21.766 CC examples/idxd/perf/perf.o 00:02:21.766 LINK spdk_nvme_perf 00:02:21.766 CC examples/vmd/lsvmd/lsvmd.o 00:02:21.766 CC test/event/event_perf/event_perf.o 00:02:21.766 CC test/event/reactor_perf/reactor_perf.o 00:02:21.766 CC examples/sock/hello_world/hello_sock.o 00:02:21.766 CC test/event/reactor/reactor.o 00:02:21.766 CC test/event/app_repeat/app_repeat.o 00:02:21.766 CC app/vhost/vhost.o 00:02:21.766 CC test/event/scheduler/scheduler.o 00:02:21.766 CC examples/thread/thread/thread_ex.o 00:02:22.024 LINK led 00:02:22.024 LINK lsvmd 00:02:22.024 LINK reactor 00:02:22.024 LINK event_perf 00:02:22.024 LINK reactor_perf 00:02:22.024 LINK app_repeat 00:02:22.024 LINK hello_sock 00:02:22.024 LINK vhost 00:02:22.024 LINK memory_ut 00:02:22.024 LINK idxd_perf 00:02:22.024 CC test/nvme/overhead/overhead.o 00:02:22.024 CC test/nvme/boot_partition/boot_partition.o 00:02:22.024 CC test/nvme/aer/aer.o 00:02:22.024 LINK scheduler 00:02:22.024 CC test/nvme/fdp/fdp.o 00:02:22.024 CC test/nvme/simple_copy/simple_copy.o 00:02:22.024 CC test/nvme/e2edp/nvme_dp.o 00:02:22.024 CC test/nvme/reset/reset.o 00:02:22.024 CC test/nvme/startup/startup.o 00:02:22.024 
CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:22.024 CC test/nvme/compliance/nvme_compliance.o 00:02:22.024 CC test/nvme/sgl/sgl.o 00:02:22.024 CC test/nvme/connect_stress/connect_stress.o 00:02:22.024 CC test/nvme/fused_ordering/fused_ordering.o 00:02:22.024 CC test/nvme/reserve/reserve.o 00:02:22.024 CC test/nvme/cuse/cuse.o 00:02:22.024 CC test/nvme/err_injection/err_injection.o 00:02:22.024 CC test/blobfs/mkfs/mkfs.o 00:02:22.024 LINK thread 00:02:22.024 CC test/accel/dif/dif.o 00:02:22.281 LINK boot_partition 00:02:22.281 CC test/lvol/esnap/esnap.o 00:02:22.281 LINK startup 00:02:22.281 LINK fused_ordering 00:02:22.281 LINK doorbell_aers 00:02:22.281 LINK connect_stress 00:02:22.281 LINK err_injection 00:02:22.281 LINK reserve 00:02:22.281 LINK mkfs 00:02:22.281 LINK simple_copy 00:02:22.281 LINK overhead 00:02:22.281 LINK reset 00:02:22.281 LINK nvme_dp 00:02:22.281 LINK aer 00:02:22.281 LINK sgl 00:02:22.281 LINK nvme_compliance 00:02:22.281 LINK fdp 00:02:22.538 LINK iscsi_fuzz 00:02:22.538 LINK dif 00:02:22.538 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:22.538 CC examples/nvme/abort/abort.o 00:02:22.538 CC examples/nvme/arbitration/arbitration.o 00:02:22.538 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:22.538 CC examples/nvme/reconnect/reconnect.o 00:02:22.538 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:22.538 CC examples/nvme/hotplug/hotplug.o 00:02:22.538 CC examples/nvme/hello_world/hello_world.o 00:02:22.796 CC examples/accel/perf/accel_perf.o 00:02:22.796 CC examples/blob/hello_world/hello_blob.o 00:02:22.796 LINK pmr_persistence 00:02:22.796 LINK cmb_copy 00:02:22.796 CC examples/blob/cli/blobcli.o 00:02:22.796 LINK hello_world 00:02:22.796 LINK hotplug 00:02:22.796 LINK reconnect 00:02:22.796 LINK abort 00:02:22.796 LINK arbitration 00:02:22.796 LINK nvme_manage 00:02:23.055 LINK hello_blob 00:02:23.055 LINK accel_perf 00:02:23.055 LINK cuse 00:02:23.055 CC test/bdev/bdevio/bdevio.o 00:02:23.055 LINK blobcli 00:02:23.314 LINK 
bdevio 00:02:23.572 CC examples/bdev/hello_world/hello_bdev.o 00:02:23.572 CC examples/bdev/bdevperf/bdevperf.o 00:02:23.830 LINK hello_bdev 00:02:24.089 LINK bdevperf 00:02:24.657 CC examples/nvmf/nvmf/nvmf.o 00:02:24.915 LINK nvmf 00:02:25.849 LINK esnap 00:02:25.849 00:02:25.849 real 1m12.845s 00:02:25.849 user 12m55.985s 00:02:25.849 sys 5m12.269s 00:02:25.849 18:05:34 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:25.849 18:05:34 make -- common/autotest_common.sh@10 -- $ set +x 00:02:25.849 ************************************ 00:02:25.849 END TEST make 00:02:25.849 ************************************ 00:02:26.108 18:05:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:26.108 18:05:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:26.108 18:05:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:26.108 18:05:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.108 18:05:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:26.108 18:05:34 -- pm/common@44 -- $ pid=1972485 00:02:26.108 18:05:34 -- pm/common@50 -- $ kill -TERM 1972485 00:02:26.108 18:05:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.108 18:05:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:26.108 18:05:34 -- pm/common@44 -- $ pid=1972487 00:02:26.108 18:05:34 -- pm/common@50 -- $ kill -TERM 1972487 00:02:26.108 18:05:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.108 18:05:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:26.108 18:05:34 -- pm/common@44 -- $ pid=1972489 00:02:26.108 18:05:34 -- pm/common@50 -- $ kill -TERM 1972489 00:02:26.108 18:05:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.108 18:05:34 -- pm/common@43 -- $ [[ 
-e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:26.108 18:05:34 -- pm/common@44 -- $ pid=1972512 00:02:26.108 18:05:34 -- pm/common@50 -- $ sudo -E kill -TERM 1972512 00:02:26.108 18:05:34 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:02:26.108 18:05:34 -- nvmf/common.sh@7 -- # uname -s 00:02:26.108 18:05:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:26.108 18:05:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:26.108 18:05:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:26.108 18:05:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:26.108 18:05:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:26.108 18:05:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:26.108 18:05:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:26.108 18:05:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:26.108 18:05:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:26.108 18:05:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:26.108 18:05:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:02:26.108 18:05:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:02:26.108 18:05:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:26.108 18:05:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:26.108 18:05:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:26.108 18:05:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:26.108 18:05:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:02:26.108 18:05:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:26.108 18:05:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.108 18:05:34 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.108 18:05:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.108 18:05:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.108 18:05:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.108 18:05:34 -- paths/export.sh@5 -- # export PATH 00:02:26.108 18:05:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.108 18:05:34 -- nvmf/common.sh@47 -- # : 0 00:02:26.108 18:05:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:26.108 18:05:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:26.108 18:05:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:26.109 18:05:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:26.109 18:05:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:26.109 18:05:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:26.109 18:05:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:26.109 18:05:34 -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:02:26.109 18:05:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:26.109 18:05:34 -- spdk/autotest.sh@32 -- # uname -s 00:02:26.109 18:05:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:26.109 18:05:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:26.109 18:05:34 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:26.109 18:05:34 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:26.109 18:05:34 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:26.109 18:05:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:26.109 18:05:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:26.109 18:05:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:26.109 18:05:34 -- spdk/autotest.sh@48 -- # udevadm_pid=2040637 00:02:26.109 18:05:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:26.109 18:05:34 -- pm/common@17 -- # local monitor 00:02:26.109 18:05:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.109 18:05:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:26.109 18:05:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.109 18:05:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.109 18:05:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.109 18:05:34 -- pm/common@25 -- # sleep 1 00:02:26.109 18:05:34 -- pm/common@21 -- # date +%s 00:02:26.109 18:05:34 -- pm/common@21 -- # date +%s 00:02:26.109 18:05:34 -- pm/common@21 -- # date +%s 00:02:26.109 18:05:34 -- pm/common@21 -- # date +%s 00:02:26.109 18:05:34 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721837134 00:02:26.109 18:05:34 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721837134 00:02:26.109 18:05:34 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721837134 00:02:26.109 18:05:34 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721837134 00:02:26.367 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721837134_collect-vmstat.pm.log 00:02:26.367 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721837134_collect-cpu-load.pm.log 00:02:26.367 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721837134_collect-cpu-temp.pm.log 00:02:26.367 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721837134_collect-bmc-pm.bmc.pm.log 00:02:27.340 18:05:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:27.340 18:05:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:27.340 18:05:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:27.340 18:05:35 -- common/autotest_common.sh@10 -- # set +x 00:02:27.340 18:05:35 -- spdk/autotest.sh@59 -- # create_test_list 00:02:27.340 18:05:35 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:27.340 18:05:35 -- common/autotest_common.sh@10 -- # set +x 00:02:27.340 18:05:35 -- spdk/autotest.sh@61 -- # dirname 
/var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:02:27.340 18:05:35 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:27.340 18:05:35 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:27.340 18:05:35 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:02:27.341 18:05:35 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:27.341 18:05:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:27.341 18:05:35 -- common/autotest_common.sh@1455 -- # uname 00:02:27.341 18:05:35 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:27.341 18:05:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:27.341 18:05:35 -- common/autotest_common.sh@1475 -- # uname 00:02:27.341 18:05:35 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:27.341 18:05:35 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:27.341 18:05:35 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:27.341 18:05:35 -- spdk/autotest.sh@72 -- # hash lcov 00:02:27.341 18:05:35 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:27.341 18:05:35 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:27.341 --rc lcov_branch_coverage=1 00:02:27.341 --rc lcov_function_coverage=1 00:02:27.341 --rc genhtml_branch_coverage=1 00:02:27.341 --rc genhtml_function_coverage=1 00:02:27.341 --rc genhtml_legend=1 00:02:27.341 --rc geninfo_all_blocks=1 00:02:27.341 ' 00:02:27.341 18:05:35 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:27.341 --rc lcov_branch_coverage=1 00:02:27.341 --rc lcov_function_coverage=1 00:02:27.341 --rc genhtml_branch_coverage=1 00:02:27.341 --rc genhtml_function_coverage=1 00:02:27.341 --rc genhtml_legend=1 00:02:27.341 --rc geninfo_all_blocks=1 00:02:27.341 ' 00:02:27.341 18:05:35 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:27.341 --rc lcov_branch_coverage=1 
00:02:27.341 --rc lcov_function_coverage=1 00:02:27.341 --rc genhtml_branch_coverage=1 00:02:27.341 --rc genhtml_function_coverage=1 00:02:27.341 --rc genhtml_legend=1 00:02:27.341 --rc geninfo_all_blocks=1 00:02:27.341 --no-external' 00:02:27.341 18:05:35 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:27.341 --rc lcov_branch_coverage=1 00:02:27.341 --rc lcov_function_coverage=1 00:02:27.341 --rc genhtml_branch_coverage=1 00:02:27.341 --rc genhtml_function_coverage=1 00:02:27.341 --rc genhtml_legend=1 00:02:27.341 --rc geninfo_all_blocks=1 00:02:27.341 --no-external' 00:02:27.341 18:05:35 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:27.341 lcov: LCOV version 1.14 00:02:27.341 18:05:35 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 
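The baseline capture above (`lcov … -c -i -t Baseline`) emits one `no functions found` warning per header-only `.gcno` object, and that stream continues below. A hedged helper for tallying such warnings from a saved log — the file name `geninfo.log` and the `/tmp` paths are hypothetical fixtures, not part of the pipeline:

```shell
#!/usr/bin/env bash
# Sketch: count geninfo "no functions found" warnings in a captured log.
# Assumes the console output was saved to a file; geninfo.log is hypothetical.
set -euo pipefail

summarize_geninfo() {
    local log="$1"
    local empty
    # Each coverage-less .gcno produces a line ending in ":no functions found".
    empty=$(grep -c ':no functions found$' "$log" || true)
    echo "objects without coverage data: $empty"
}

# Fixture standing in for a captured log:
printf '%s\n' \
    '/tmp/spdk/test/cpp_headers/accel.gcno:no functions found' \
    '/tmp/spdk/test/cpp_headers/assert.gcno:no functions found' \
    > /tmp/geninfo.log
summarize_geninfo /tmp/geninfo.log   # prints: objects without coverage data: 2
```

For header-only compilation units this count is expected to be large and harmless, as the warnings that follow show.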
00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:28.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:28.717 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:28.718 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:28.718 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:28.718 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:28.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:28.718 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:28.718 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:28.976 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:28.976 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:28.976 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:28.977 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 
00:02:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:28.977 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:28.977 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:28.977 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:28.977 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:28.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:29.235 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:29.235 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:29.235 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:29.236 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:29.236 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:29.236 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:29.495 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:29.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:29.495 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:29.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:29.495 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:29.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:29.495 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:29.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:29.495 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:29.495 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:41.707 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:41.707 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:53.934 18:06:00 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:53.934 18:06:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:53.934 18:06:00 -- common/autotest_common.sh@10 -- # set +x 00:02:53.934 18:06:00 -- spdk/autotest.sh@91 -- # rm -f 00:02:53.934 18:06:00 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.488 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:56.488 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:56.747 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:56.747 18:06:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:56.747 18:06:05 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:56.747 18:06:05 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:56.747 18:06:05 -- common/autotest_common.sh@1670 -- # 
local nvme bdf 00:02:56.747 18:06:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:56.747 18:06:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:56.747 18:06:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:56.747 18:06:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:56.747 18:06:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:56.747 18:06:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:56.747 18:06:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:56.747 18:06:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:56.747 18:06:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:56.747 18:06:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:56.747 18:06:05 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:56.747 No valid GPT data, bailing 00:02:56.747 18:06:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:56.747 18:06:05 -- scripts/common.sh@391 -- # pt= 00:02:56.747 18:06:05 -- scripts/common.sh@392 -- # return 1 00:02:56.747 18:06:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:56.747 1+0 records in 00:02:56.747 1+0 records out 00:02:56.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00311717 s, 336 MB/s 00:02:56.747 18:06:05 -- spdk/autotest.sh@118 -- # sync 00:02:56.747 18:06:05 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:56.747 18:06:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:56.747 18:06:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:04.862 18:06:12 -- spdk/autotest.sh@124 -- # uname -s 00:03:04.862 18:06:12 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:04.862 18:06:12 -- spdk/autotest.sh@125 -- # run_test setup.sh 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:04.862 18:06:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:04.862 18:06:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:04.862 18:06:12 -- common/autotest_common.sh@10 -- # set +x 00:03:04.862 ************************************ 00:03:04.862 START TEST setup.sh 00:03:04.862 ************************************ 00:03:04.862 18:06:12 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:04.862 * Looking for test storage... 00:03:04.862 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:04.862 18:06:12 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:04.862 18:06:12 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:04.862 18:06:12 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:04.862 18:06:12 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:04.862 18:06:12 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:04.862 18:06:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:04.862 ************************************ 00:03:04.862 START TEST acl 00:03:04.862 ************************************ 00:03:04.862 18:06:12 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:04.862 * Looking for test storage... 
00:03:04.862 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:04.862 18:06:12 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:04.862 18:06:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:04.862 18:06:12 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:04.862 18:06:12 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:04.862 18:06:12 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:04.863 18:06:12 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:04.863 18:06:12 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:04.863 18:06:12 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:04.863 18:06:12 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:04.863 18:06:12 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:04.863 18:06:12 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:04.863 18:06:12 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:04.863 18:06:12 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:04.863 18:06:12 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:04.863 18:06:12 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.863 18:06:12 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.216 18:06:16 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:08.216 18:06:16 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:08.216 18:06:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.216 18:06:16 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:08.216 18:06:16 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.216 18:06:16 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:03:12.406 Hugepages 00:03:12.406 node hugesize free / total 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.406 00:03:12.406 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:12.406 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 
18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:12.407 18:06:20 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:12.407 18:06:20 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.407 18:06:20 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.407 18:06:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:12.407 ************************************ 00:03:12.407 START TEST denied 00:03:12.407 ************************************ 00:03:12.407 18:06:20 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:12.407 18:06:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:12.407 18:06:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:12.407 18:06:20 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:12.407 18:06:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.407 18:06:20 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:16.598 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:16.599 18:06:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:16.599 18:06:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:16.599 18:06:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:16.599 18:06:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:16.599 18:06:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:16.599 18:06:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:16.599 18:06:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:16.599 18:06:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:16.599 18:06:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.599 18:06:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.890 00:03:21.890 real 0m8.623s 00:03:21.890 user 0m2.564s 00:03:21.890 sys 0m5.257s 00:03:21.890 18:06:29 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:21.890 18:06:29 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:21.890 ************************************ 00:03:21.890 END TEST denied 00:03:21.890 ************************************ 00:03:21.890 18:06:29 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:21.890 18:06:29 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:21.891 18:06:29 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:21.891 18:06:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:21.891 ************************************ 00:03:21.891 START TEST allowed 00:03:21.891 
************************************ 00:03:21.891 18:06:29 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:21.891 18:06:29 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:21.891 18:06:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:21.891 18:06:29 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:21.891 18:06:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.891 18:06:29 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:27.168 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:27.168 18:06:35 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:27.169 18:06:35 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:27.169 18:06:35 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:27.169 18:06:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.169 18:06:35 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.364 00:03:31.364 real 0m9.608s 00:03:31.364 user 0m2.438s 00:03:31.364 sys 0m5.184s 00:03:31.364 18:06:39 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.364 18:06:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:31.364 ************************************ 00:03:31.364 END TEST allowed 00:03:31.364 ************************************ 00:03:31.364 00:03:31.364 real 0m26.765s 00:03:31.364 user 0m7.938s 00:03:31.364 sys 0m16.268s 00:03:31.364 18:06:39 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.364 18:06:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.364 ************************************ 00:03:31.364 END TEST acl 00:03:31.364 ************************************ 00:03:31.364 18:06:39 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.364 18:06:39 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.364 18:06:39 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.364 18:06:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:31.364 ************************************ 00:03:31.364 START TEST hugepages 00:03:31.364 ************************************ 00:03:31.364 18:06:39 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.364 * Looking for test storage... 00:03:31.364 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.364 18:06:39 
setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36489176 kB' 'MemAvailable: 40579268 kB' 'Buffers: 4096 kB' 'Cached: 14988220 kB' 'SwapCached: 0 kB' 'Active: 11809256 kB' 'Inactive: 3699080 kB' 'Active(anon): 11331028 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519512 kB' 'Mapped: 220580 kB' 'Shmem: 10815008 kB' 'KReclaimable: 560276 kB' 'Slab: 1268296 kB' 'SReclaimable: 560276 kB' 'SUnreclaim: 708020 kB' 'KernelStack: 22576 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 12812360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220564 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:31.364 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 
18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 
18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.365 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.366 18:06:39 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:31.366 [identical xtrace iterations (IFS=': ', read -r var val _, key test, continue) for the remaining /proc/meminfo keys: VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp]
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
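The loop traced above is setup/common.sh's meminfo scan: split each /proc/meminfo line on ': ', skip keys until the requested one matches, then echo its value. A minimal standalone sketch of that pattern (the function name and the optional file argument are illustrative, not SPDK's exact helper):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style scan: walk a meminfo-format file line by
# line, splitting on ':' and whitespace, and print the value of one key.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # value only, unit column is discarded into _
            return 0
        fi
    done < "$file"
    return 1              # key not present
}
```

Called as `get_meminfo_field Hugepagesize`, this should print 2048 on a host configured like the one in this log, which is exactly the value the trace feeds into default_hugepages.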
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:31.366 18:06:39 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:31.366 18:06:39 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:31.366 18:06:39 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:31.366 18:06:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:31.366 ************************************
00:03:31.366 START TEST default_setup
00:03:31.366 ************************************
00:03:31.366 18:06:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
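The clear_hp trace above writes 0 to every per-node nr_hugepages file (two nodes, two hugepage sizes each, hence the four echo 0 entries). A sketch of that loop, with the sysfs root made a parameter so it can be dry-run against a fake tree; the real script walks /sys/devices/system/node and needs root:

```shell
#!/usr/bin/env bash
# Sketch of clear_hp: for every NUMA node directory, zero each per-node,
# per-size nr_hugepages count, releasing any reserved huge pages.
clear_hp() {
    local base=${1:-/sys/devices/system/node} node hp
    for node in "$base"/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # free this node's pages of this size
        done
    done
}
```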
00:03:31.366 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:31.366 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:31.366 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:31.366 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:31.366 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:31.366 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
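The nr_hugepages=1024 seen in the trace above is simple arithmetic: the requested size in kB divided by the default hugepage size found earlier. A sketch of just that calculation (function name is illustrative; SPDK's real helper also handles per-node distribution):

```shell
#!/usr/bin/env bash
# Sketch of the size -> page-count step: 2097152 kB of huge memory at the
# 2048 kB default hugepage size comes out to 1024 pages, as in the trace.
get_test_nr_hugepages() {
    local size=$1 default_hugepages=${2:-2048}   # both in kB
    echo $(( size / default_hugepages ))
}
```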
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.367 18:06:39 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:35.646 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:35.646 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:37.026 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
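Each "ioatdma -> vfio-pci" line printed by setup.sh above is a PCI driver rebind done through sysfs. A hedged sketch of one common way to do such a rebind (this is not setup.sh's exact code path; the sysfs root is a parameter here so the function can be exercised on a fake tree, while the real flow targets /sys/bus/pci and requires root):

```shell
#!/usr/bin/env bash
# Sketch of a sysfs driver rebind: detach a PCI function from its current
# driver, prefer vfio-pci via driver_override, then re-trigger probing.
rebind_to_vfio() {
    local bdf=$1 sysfs=${2:-/sys/bus/pci}
    local dev=$sysfs/devices/$bdf
    # detach from the current driver, if any is bound
    if [ -e "$dev/driver/unbind" ]; then
        echo "$bdf" > "$dev/driver/unbind"
    fi
    # prefer vfio-pci on the next probe, then ask the kernel to probe
    echo vfio-pci > "$dev/driver_override"
    echo "$bdf" > "$sysfs/drivers_probe"
}
```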
setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38726672 kB' 'MemAvailable: 42816660 kB' 'Buffers: 4096 kB' 'Cached: 14988376 kB' 'SwapCached: 0 kB' 'Active: 11824856 kB' 'Inactive: 3699080 kB' 'Active(anon): 11346628 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534156 kB' 'Mapped: 220728 kB' 'Shmem: 10815164 kB' 'KReclaimable: 560172 kB' 'Slab: 1266124 kB' 'SReclaimable: 
560172 kB' 'SUnreclaim: 705952 kB' 'KernelStack: 22640 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12829304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220644 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:37.291 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:37.291 [identical xtrace iterations (IFS=': ', read -r var val _, key test, continue) for every key in the snapshot above, from MemFree through HardwareCorrupted]
00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:37.292 18:06:45 setup.sh.hugepages.default_setup --
setup/common.sh@19 -- # local var val 00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.292 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38729568 kB' 'MemAvailable: 42819556 kB' 'Buffers: 4096 kB' 'Cached: 14988380 kB' 'SwapCached: 0 kB' 'Active: 11824376 kB' 'Inactive: 3699080 kB' 'Active(anon): 11346148 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534748 kB' 'Mapped: 220716 kB' 'Shmem: 10815168 kB' 'KReclaimable: 560172 kB' 'Slab: 1266092 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 705920 kB' 'KernelStack: 22560 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12829320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220564 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.293 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... identical per-key compare/continue traces for the remaining meminfo fields (SwapCached through Unaccepted), none matching HugePages_Surp, elided ...] 00:03:37.294 18:06:45
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38731460 kB' 'MemAvailable: 42821448 kB' 'Buffers: 4096 kB' 'Cached: 14988396 kB' 'SwapCached: 0 kB' 'Active: 11824100 kB' 'Inactive: 3699080 kB' 'Active(anon): 11345872 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533928 kB' 'Mapped: 220716 kB' 'Shmem: 10815184 kB' 'KReclaimable: 560172 kB' 'Slab: 1266232 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706060 kB' 'KernelStack: 22608 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12827852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220644 kB' 
'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.294 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [... identical per-key compare/continue traces for the remaining meminfo fields (Cached through Shmem), none matching HugePages_Rsvd, elided ...] 00:03:37.295 18:06:45
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.295 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.296 nr_hugepages=1024 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.296 resv_hugepages=0 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.296 
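The trace above is setup/common.sh's `get_meminfo` walking /proc/meminfo key by key: each non-matching line produces one `read`/`[[ … ]]`/`continue` triple until the requested key (here HugePages_Rsvd) matches and its value is echoed. A minimal self-contained sketch of that pattern (not the SPDK code itself; a small inline sample stands in for /proc/meminfo):

```shell
#!/usr/bin/env bash
# Sketch of the key-scan loop visible in the trace: split each meminfo
# line on ": " into key/value, skip until the requested key matches,
# then print the value. The here-doc replaces /proc/meminfo so the
# sketch runs anywhere.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key corresponds to one "continue" in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done <<'EOF'
MemTotal: 60295192 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
EOF
    return 1
}

get_meminfo_sketch HugePages_Rsvd   # prints 0
get_meminfo_sketch HugePages_Total  # prints 1024
```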
surplus_hugepages=0 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.296 anon_hugepages=0 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.296 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.297 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38728576 kB' 'MemAvailable: 42818564 kB' 'Buffers: 4096 kB' 'Cached: 14988416 kB' 'SwapCached: 0 kB' 'Active: 11824340 kB' 'Inactive: 3699080 kB' 'Active(anon): 11346112 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534144 kB' 'Mapped: 220716 kB' 'Shmem: 10815204 kB' 'KReclaimable: 560172 kB' 'Slab: 1266232 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706060 kB' 'KernelStack: 22544 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12827872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220660 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:37.297
[identical IFS=': ' / read -r var val _ / [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue iterations for every /proc/meminfo key, MemTotal through Unaccepted, elided]
18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29
-- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.298 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 
'MemFree: 21473956 kB' 'MemUsed: 11118128 kB' 'SwapCached: 0 kB' 'Active: 6751856 kB' 'Inactive: 412208 kB' 'Active(anon): 6474544 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 412208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001456 kB' 'Mapped: 86832 kB' 'AnonPages: 165776 kB' 'Shmem: 6311936 kB' 'KernelStack: 12040 kB' 'PageTables: 4908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 384728 kB' 'Slab: 736972 kB' 'SReclaimable: 384728 kB' 'SUnreclaim: 352244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 
18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.299 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.300 18:06:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:37.300 18:06:45 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.300 node0=1024 expecting 1024 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.300 00:03:37.300 real 0m6.285s 00:03:37.300 user 0m1.636s 00:03:37.300 sys 0m2.762s 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.300 18:06:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:37.300 ************************************ 00:03:37.300 END TEST default_setup 00:03:37.300 ************************************ 00:03:37.300 18:06:45 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:37.300 18:06:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.300 18:06:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.300 18:06:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:37.560 ************************************ 00:03:37.560 START TEST per_node_1G_alloc 00:03:37.560 ************************************ 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:37.560 
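The long xtrace runs above are `setup/common.sh`'s `get_meminfo` helper scanning every field of `/proc/meminfo` (or the per-node copy under `/sys/devices/system/node/nodeN/meminfo`) until it hits the requested key. A minimal standalone sketch of that parsing technique follows; the explicit file argument is an assumption added here so the sketch can be exercised against a sample file rather than the live system, and it is not part of the real helper's signature.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo technique traced above: strip the "Node <N> "
# prefix that per-node meminfo files carry, then split each line on ': '
# with read until the requested field name matches.
shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern

get_meminfo() {
    # $2 (the meminfo file) is a testability assumption; setup/common.sh
    # instead picks /proc/meminfo or the per-node sysfs file itself.
    local get=$1 mem_f=${2:-/proc/meminfo}
    local -a mem
    local line var val _
    mapfile -t mem <"$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; drop it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # value only; unit (kB) lands in _
            return 0
        fi
    done
    return 1
}
```

The real helper walks the array with the same `IFS=': ' read -r var val _` split and `continue`s past non-matching keys, which is exactly the per-field trace visible in the log.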
18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.560 18:06:45 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.560 18:06:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:41.764 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.764 
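The `NRHUGE=512 HUGENODE=0,1` invocation of `scripts/setup.sh` above asks for 512 huge pages on each of nodes 0 and 1. The standard kernel interface for such a per-node reservation is the `nr_hugepages` sysfs knob; the sketch below shows that write, with a root-path parameter added purely as an assumption so it can run without root against a fake tree (the real path root is `/`, and writing it requires privileges and existing NUMA nodes).

```shell
#!/usr/bin/env bash
# Sketch: reserve NR 2 MiB huge pages on each listed NUMA node via the
# per-node sysfs knob. sysfs_root is a test-only assumption.
reserve_hugepages() {
    local sysfs_root=$1 nr=$2
    shift 2
    local node
    for node in "$@"; do
        echo "$nr" >"$sysfs_root/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done
}

# Real-system usage (as root) would be: reserve_hugepages "" 512 0 1
```

After such a write, `HugePages_Total` in each node's meminfo reflects the reservation, which is what the surrounding `verify_nr_hugepages` checks are re-reading.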
0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.764 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.764 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38713572 kB' 'MemAvailable: 42803560 kB' 'Buffers: 4096 kB' 'Cached: 14988536 kB' 'SwapCached: 0 kB' 'Active: 11823796 kB' 'Inactive: 3699080 kB' 'Active(anon): 11345568 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533796 kB' 'Mapped: 219660 kB' 'Shmem: 10815324 kB' 'KReclaimable: 560172 kB' 'Slab: 1267080 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706908 kB' 'KernelStack: 22528 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12818936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220580 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.764 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 
18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.765 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.766 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38717528 kB' 'MemAvailable: 42807516 kB' 'Buffers: 4096 kB' 'Cached: 14988540 kB' 'SwapCached: 0 kB' 'Active: 11823036 kB' 'Inactive: 3699080 kB' 'Active(anon): 11344808 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533396 kB' 'Mapped: 219572 kB' 'Shmem: 10815328 kB' 'KReclaimable: 560172 kB' 'Slab: 1267040 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706868 kB' 'KernelStack: 22384 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12820412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220564 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 
kB' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:41.766 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.767 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38718500 kB' 'MemAvailable: 42808488 kB' 'Buffers: 4096 kB' 'Cached: 14988556 kB' 'SwapCached: 0 kB' 'Active: 11823280 kB' 'Inactive: 3699080 kB' 
'Active(anon): 11345052 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533652 kB' 'Mapped: 219572 kB' 'Shmem: 10815344 kB' 'KReclaimable: 560172 kB' 'Slab: 1267000 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706828 kB' 'KernelStack: 22400 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12819076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220532 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.768 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 
18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.769 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:41.770 nr_hugepages=1024 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.770 resv_hugepages=0 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.770 surplus_hugepages=0 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.770 anon_hugepages=0 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.770 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38715504 kB' 'MemAvailable: 42805492 kB' 'Buffers: 4096 kB' 'Cached: 14988580 kB' 'SwapCached: 0 kB' 'Active: 11823564 kB' 'Inactive: 3699080 kB' 'Active(anon): 11345336 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533892 kB' 'Mapped: 219572 kB' 'Shmem: 10815368 kB' 'KReclaimable: 560172 kB' 'Slab: 1267000 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706828 kB' 'KernelStack: 22544 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12819100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220644 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 
18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.771 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:41.772 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.772 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.773 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22517012 kB' 'MemUsed: 10075072 kB' 'SwapCached: 0 kB' 'Active: 6750772 kB' 'Inactive: 412208 kB' 'Active(anon): 6473460 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 412208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001584 kB' 'Mapped: 86316 kB' 'AnonPages: 164800 kB' 'Shmem: 6312064 kB' 'KernelStack: 12024 kB' 'PageTables: 4892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 384728 kB' 'Slab: 737512 kB' 'SReclaimable: 384728 kB' 'SUnreclaim: 352784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.773 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.773 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 
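Once `get_meminfo` returns, `hugepages.sh@115-117` folds each node's reserved and surplus pages into `nodes_test`, and `@126-128` later keys the distinct totals into `sorted_t` so that a single key proves the split came out even. A sketch of that bookkeeping, with the inputs assumed from this log (2 nodes, 512 reserved pages each, `HugePages_Surp` of 0):

```shell
#!/usr/bin/env bash
# Sketch of the per-node tally from hugepages.sh@115-128 above: each
# node's expected count is its reserved pages plus any surplus reported
# by get_meminfo; keying the totals into sorted_t means one distinct
# key == an even allocation across all nodes.
declare -a nodes_test=(512 512)   # reserved per node (assumed from log)
declare -A sorted_t
surp=0                            # HugePages_Surp reported 0 above

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += surp ))          # hugepages.sh@117
done
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1         # hugepages.sh@127
    echo "node$node=${nodes_test[node]} expecting 512"
done
(( ${#sorted_t[@]} == 1 )) && echo 'allocation is even'
```

With these inputs the loop prints the same `node0=512 expecting 512` / `node1=512 expecting 512` lines that appear verbatim in the log below.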
00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 16199092 kB' 'MemUsed: 11504016 kB' 'SwapCached: 0 kB' 'Active: 5073096 kB' 'Inactive: 3286872 kB' 'Active(anon): 4872180 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3286872 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7991112 kB' 'Mapped: 133256 kB' 'AnonPages: 369244 kB' 'Shmem: 4503324 kB' 'KernelStack: 10472 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175444 kB' 'Slab: 529488 kB' 'SReclaimable: 175444 kB' 'SUnreclaim: 354044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.774 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.775 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.775 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.775 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.775 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.775 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [identical IFS/read/match/continue xtrace cycles elided for the remaining node1 meminfo fields, MemUsed through HugePages_Free] 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:41.776 node0=512 expecting 512 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:41.776 node1=512 expecting 512 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:41.776 00:03:41.776 real 0m4.391s 00:03:41.776 user 0m1.628s 00:03:41.776 sys 0m2.839s 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.776 18:06:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:41.776 ************************************ 00:03:41.776 END TEST per_node_1G_alloc 00:03:41.776 ************************************ 00:03:41.776 18:06:50 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:41.776 18:06:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.776 18:06:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.776 18:06:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.036 ************************************ 00:03:42.036 START TEST even_2G_alloc 00:03:42.036 ************************************ 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
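The `get_test_nr_hugepages 2097152` call traced above converts the requested size into a page count and splits it evenly across the NUMA nodes. A sketch of that arithmetic, where the 2048 kB default hugepage size and the 2-node topology are assumptions read off this log:

```shell
#!/usr/bin/env bash
# Sketch of get_test_nr_hugepages as traced above: a 2097152 kB (2 GiB)
# request divided by the default hugepage size gives nr_hugepages, and
# the countdown loop assigns each node an even share, highest node first
# (mirroring nodes_test[_no_nodes - 1]=512 in the trace).
size=2097152              # kB requested by even_2G_alloc
default_hugepages=2048    # kB per hugepage (typical x86_64 default)
_no_nodes=2

nr_hugepages=$(( size / default_hugepages ))
no_nodes=$_no_nodes
declare -a nodes_test
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( nr_hugepages / no_nodes ))   # hugepages.sh@82
    (( _no_nodes-- ))
done
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"
```

2097152 / 2048 = 1024 pages, halved to 512 per node, which is why the test later checks `node0=512 expecting 512` and `node1=512 expecting 512`.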
00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.037 18:06:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:46.243 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.243 0000:80:04.0 (8086 2021): Already using the vfio-pci 
driver 00:03:46.243 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.243 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38749384 kB' 'MemAvailable: 42839372 kB' 'Buffers: 4096 kB' 'Cached: 14988712 kB' 'SwapCached: 0 kB' 'Active: 11825320 kB' 'Inactive: 3699080 kB' 'Active(anon): 11347092 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534904 kB' 'Mapped: 219596 kB' 'Shmem: 10815500 kB' 'KReclaimable: 560172 kB' 'Slab: 1266816 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706644 kB' 'KernelStack: 22480 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12818928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220628 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.243 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 
18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.244 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38750420 kB' 'MemAvailable: 42840408 kB' 'Buffers: 4096 kB' 'Cached: 14988716 kB' 'SwapCached: 0 kB' 'Active: 11824672 kB' 'Inactive: 3699080 kB' 'Active(anon): 11346444 kB' 
'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534272 kB' 'Mapped: 219588 kB' 'Shmem: 10815504 kB' 'KReclaimable: 560172 kB' 'Slab: 1266816 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706644 kB' 'KernelStack: 22480 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12818948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220596 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.245 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 
18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.247 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 60295192 kB' 'MemFree: 38749896 kB' 'MemAvailable: 42839884 kB' 'Buffers: 4096 kB' 'Cached: 14988732 kB' 'SwapCached: 0 kB' 'Active: 11824688 kB' 'Inactive: 3699080 kB' 'Active(anon): 11346460 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534272 kB' 'Mapped: 219588 kB' 'Shmem: 10815520 kB' 'KReclaimable: 560172 kB' 'Slab: 1266816 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706644 kB' 'KernelStack: 22480 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12818968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220612 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.247 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.248 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 
18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.249 nr_hugepages=1024 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.249 resv_hugepages=0 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.249 surplus_hugepages=0 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.249 anon_hugepages=0 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.249 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.249 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38750400 kB' 'MemAvailable: 42840388 kB' 'Buffers: 4096 kB' 'Cached: 14988732 kB' 'SwapCached: 0 kB' 'Active: 11824928 kB' 'Inactive: 3699080 kB' 'Active(anon): 11346700 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534516 kB' 'Mapped: 219588 kB' 'Shmem: 10815520 kB' 'KReclaimable: 560172 kB' 'Slab: 1266816 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706644 kB' 'KernelStack: 22496 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12820112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220596 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.250 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 
18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.251 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.252 
18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22548144 kB' 'MemUsed: 10043940 kB' 'SwapCached: 0 kB' 'Active: 6751468 kB' 'Inactive: 412208 kB' 'Active(anon): 6474156 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 412208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001752 kB' 'Mapped: 86316 kB' 'AnonPages: 165276 kB' 'Shmem: 6312232 kB' 'KernelStack: 11944 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 384728 kB' 'Slab: 737220 kB' 'SReclaimable: 384728 kB' 'SUnreclaim: 352492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.252 18:06:54 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.252 [xtrace condensed: setup/common.sh@31-32 repeats IFS=': '; read -r var val _; [[ <field> == HugePages_Surp ]] || continue for each remaining node0 meminfo field (AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free)] 00:03:46.253 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.253 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.253 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.253 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.253 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.253 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.253 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:46.253 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.253 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:46.253 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.254 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.254 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.254 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:46.254 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:46.254 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.254 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.254 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.254 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.254 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 16204112 kB' 'MemUsed: 11498996 kB' 'SwapCached: 0 kB' 'Active: 5073472 kB' 'Inactive: 3286872 kB' 'Active(anon): 4872556 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3286872 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7991136 kB' 'Mapped: 133272 kB' 'AnonPages: 369252 kB' 'Shmem: 4503348 kB' 'KernelStack: 10440 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175444 kB' 'Slab: 529596 kB' 'SReclaimable: 175444 kB' 'SUnreclaim: 354152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.254 [xtrace condensed: setup/common.sh@31-32 scans each node1 field above with IFS=': '; read -r var val _, continuing past every field except HugePages_Surp] 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:46.255 node0=512 expecting 512 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:46.255 node1=512 expecting 512 00:03:46.255 18:06:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:46.256 00:03:46.256 real 0m4.413s
00:03:46.256 user 0m1.624s 00:03:46.256 sys 0m2.874s 00:03:46.256 18:06:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.256 18:06:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.256 ************************************ 00:03:46.256 END TEST even_2G_alloc 00:03:46.256 ************************************ 00:03:46.515 18:06:54 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:46.515 18:06:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.515 18:06:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.515 18:06:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.515 ************************************ 00:03:46.515 START TEST odd_alloc 00:03:46.515 ************************************ 00:03:46.515 18:06:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:46.515 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:46.515 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:46.515 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.516 18:06:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:50.717 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.717 
0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.717 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.717 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:50.717 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:50.717 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.717 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.717 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:50.717 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:50.717 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:50.717 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.717 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.717 
18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38735560 kB' 'MemAvailable: 42825548 kB' 'Buffers: 4096 kB' 'Cached: 14988876 kB' 'SwapCached: 0 kB' 'Active: 11831352 kB' 'Inactive: 3699080 kB' 'Active(anon): 11353124 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540364 kB' 'Mapped: 220184 kB' 'Shmem: 10815664 kB' 'KReclaimable: 560172 kB' 'Slab: 1266664 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706492 kB' 'KernelStack: 22480 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12825888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220632 kB' 
'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:50.718 [xtrace condensed: setup/common.sh@31-32 scans each field above with IFS=': '; read -r var val _, continuing past every field except AnonHugePages; the trace runs on past the end of this section]
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.718 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 
'MemFree: 38740152 kB' 'MemAvailable: 42830140 kB' 'Buffers: 4096 kB' 'Cached: 14988880 kB' 'SwapCached: 0 kB' 'Active: 11826372 kB' 'Inactive: 3699080 kB' 'Active(anon): 11348144 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535432 kB' 'Mapped: 219680 kB' 'Shmem: 10815668 kB' 'KReclaimable: 560172 kB' 'Slab: 1266664 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706492 kB' 'KernelStack: 22496 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12819784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220612 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.719 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.720 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:50.721 18:06:58 
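The trace above ends with `surp=0`: `get_meminfo` in `setup/common.sh` walks every `key: value` line of `/proc/meminfo`, `continue`-ing past each key until it matches the requested one (`HugePages_Surp` here), then echoes the value. A minimal sketch of that loop, using a hypothetical helper name and an inline sample instead of the real `/proc/meminfo`, assuming the behavior shown in the trace:

```shell
# Sketch of the get_meminfo scan seen in the trace (hypothetical helper name).
# Splits each "key: value [unit]" line on ': ' and returns the value for the
# requested key; the trailing '_' field swallows the " kB" unit when present.
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue  # non-matching key: keep scanning
    echo "$val"
    return 0
  done
  return 1  # key not found
}

# Inline stand-in for the /proc/meminfo tail shown in the log.
meminfo_sample='HugePages_Total: 1025
HugePages_Free: 1025
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB'

surp=$(get_meminfo_sketch HugePages_Surp <<< "$meminfo_sample")
echo "surp=$surp"
```

The real script feeds the function a `mapfile`-captured snapshot (optionally a per-NUMA-node `meminfo`), but the key-matching loop is the same shape.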
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38741524 kB' 'MemAvailable: 42831512 kB' 'Buffers: 4096 kB' 'Cached: 14988896 kB' 'SwapCached: 0 kB' 'Active: 11825564 kB' 'Inactive: 3699080 kB' 'Active(anon): 11347336 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535084 kB' 'Mapped: 219600 kB' 'Shmem: 10815684 kB' 'KReclaimable: 560172 kB' 'Slab: 1266632 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706460 kB' 'KernelStack: 22464 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 
kB' 'Committed_AS: 12819804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220612 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.721 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 
18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.722 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 
18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:50.723 nr_hugepages=1025 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.723 resv_hugepages=0 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.723 surplus_hugepages=0 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.723 anon_hugepages=0 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc 
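At this point the trace has gathered `nr_hugepages=1025`, `resv_hugepages=0`, `surplus_hugepages=0`, and `anon_hugepages=0`, and `setup/hugepages.sh` checks that the kernel's total matches the odd allocation it requested: `(( 1025 == nr_hugepages + surp + resv ))`. A hedged sketch of that accounting check, with the values taken from the log:

```shell
# Sketch of the odd_alloc accounting check from setup/hugepages.sh@107-109,
# using the values echoed in this log. The kernel-reported total must equal
# the requested page count plus surplus and reserved pages.
nr_hugepages=1025   # requested odd allocation (echoed as nr_hugepages=1025)
surp=0              # HugePages_Surp from get_meminfo
resv=0              # HugePages_Rsvd from get_meminfo
total=1025          # HugePages_Total from get_meminfo

if (( total == nr_hugepages + surp + resv )); then
  echo "hugepages accounting consistent"
else
  echo "hugepages accounting mismatch: total=$total" >&2
fi
```

The follow-up `get_meminfo HugePages_Total` scan that begins below re-reads the snapshot to confirm `(( 1025 == nr_hugepages ))` against the live counter.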
-- setup/common.sh@19 -- # local var val 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38741624 kB' 'MemAvailable: 42831612 kB' 'Buffers: 4096 kB' 'Cached: 14988916 kB' 'SwapCached: 0 kB' 'Active: 11825412 kB' 'Inactive: 3699080 kB' 'Active(anon): 11347184 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534836 kB' 'Mapped: 219600 kB' 'Shmem: 10815704 kB' 'KReclaimable: 560172 kB' 'Slab: 1266628 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706456 kB' 'KernelStack: 22480 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12819828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220612 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.723 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc --
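The trace above is `setup/common.sh`'s `get_meminfo` scanning meminfo key by key: each line is split on `': '`, non-matching keys hit `continue`, and the first match echoes its value (here `HugePages_Total` yields 1025). A minimal sketch of that split-and-match loop, assuming nothing beyond what the trace shows (`get_field` and the sample file are illustrative names, not SPDK's API):

```shell
#!/usr/bin/env bash
# Minimal sketch of the meminfo scan traced above: split each line on ': ',
# skip non-matching keys (the repeated "continue" entries in the log), and
# print the value of the first matching key.
# get_field and the sample file are illustrative, not part of SPDK.
get_field() {
  local get=$1 file=$2 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # mirrors the [[ ... ]] / continue pairs
    echo "$val"
    return 0
  done < "$file"
  return 1
}

sample=$(mktemp)
printf '%s\n' 'MemTotal: 32592084 kB' 'HugePages_Total: 1025' \
  'HugePages_Free: 1025' 'HugePages_Surp: 0' > "$sample"
get_field HugePages_Total "$sample"   # prints 1025
```

Note the third field in `read -r var val _` swallows the trailing `kB` unit, which is why the echoed value is a bare number.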
setup/hugepages.sh@27 -- # local node 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.725 
18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22534856 kB' 'MemUsed: 10057228 kB' 'SwapCached: 0 kB' 'Active: 6750680 kB' 'Inactive: 412208 kB' 'Active(anon): 6473368 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 412208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001836 kB' 'Mapped: 86316 kB' 'AnonPages: 164320 kB' 'Shmem: 6312316 kB' 'KernelStack: 12008 kB' 'PageTables: 4732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 384728 kB' 'Slab: 737132 kB' 'SReclaimable: 384728 kB' 'SUnreclaim: 352404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.725 
18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.725 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.726
18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 16206768 kB' 'MemUsed: 11496340 kB' 'SwapCached: 0 kB' 'Active: 5074732 kB' 'Inactive: 3286872 kB' 'Active(anon): 4873816 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3286872 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7991176 kB' 'Mapped: 133284 kB' 'AnonPages: 370516 kB' 'Shmem: 4503388 kB' 'KernelStack: 10472 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175444 kB' 'Slab: 529496 kB' 'SReclaimable: 
175444 kB' 'SUnreclaim: 354052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.726 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 
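The `setup/hugepages.sh@115`-`@117` records interleaved above show the accounting that these meminfo scans feed: for each node, the expected count is bumped by the reserved pages and then by that node's reported `HugePages_Surp` (0 on both nodes in this run). A minimal sketch with a stubbed `get_meminfo` (the stub and the hard-coded starting numbers are taken from this run's output for illustration only):

```shell
#!/usr/bin/env bash
# Sketch of the per-node accounting traced at setup/hugepages.sh@115-@117:
# each node's expected hugepage count is bumped by the reserved pages and
# by that node's HugePages_Surp.
nodes_test=(512 513)   # expected hugepages per node, as echoed later in the log
resv=0                 # reserved pages to spread across nodes

# Stub standing in for setup/common.sh's get_meminfo; the real helper reads
# /sys/devices/system/node/node$2/meminfo (surplus is 0 throughout this run).
get_meminfo() { echo 0; }

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                  # hugepages.sh@116
    surp=$(get_meminfo HugePages_Surp "$node")
    (( nodes_test[node] += surp ))                  # hugepages.sh@117
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"
# prints: node0=512 node1=513
```

With zero surplus and zero reserve the counts pass through unchanged, which is why the test later compares them directly against the expected 512/513 split.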
18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.727 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.728 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.728 18:06:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:50.728 node0=512 expecting 513 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:50.728 node1=513 expecting 512 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:50.728 00:03:50.728 real 0m4.084s 00:03:50.728 user 0m1.496s 00:03:50.728 sys 0m2.671s 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.728 18:06:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:50.728 ************************************ 00:03:50.728 END TEST odd_alloc 00:03:50.728 ************************************ 00:03:50.728 18:06:58 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:50.728 18:06:58 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.728 18:06:58 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.728 18:06:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.728 ************************************ 00:03:50.728 START TEST custom_alloc 00:03:50.728 ************************************ 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.728 18:06:59 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:50.728 18:06:59 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.728 18:06:59 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:50.728 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:50.729 18:06:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:50.729 18:06:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.729 18:06:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:54.015 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.015 0000:d8:00.0 (8086 0a54): Already using 
the vfio-pci driver 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.278 18:07:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37680908 kB' 'MemAvailable: 41770896 kB' 'Buffers: 4096 kB' 'Cached: 14989052 kB' 'SwapCached: 0 kB' 'Active: 11827216 kB' 'Inactive: 3699080 kB' 'Active(anon): 11348988 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536464 kB' 'Mapped: 219564 kB' 'Shmem: 10815840 kB' 'KReclaimable: 560172 kB' 'Slab: 1266620 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706448 kB' 'KernelStack: 22496 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12823560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220788 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.278 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37681044 kB' 'MemAvailable: 41771032 kB' 'Buffers: 4096 kB' 'Cached: 14989056 kB' 'SwapCached: 0 kB' 'Active: 11827484 kB' 'Inactive: 3699080 kB' 'Active(anon): 11349256 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB'
'Writeback: 0 kB' 'AnonPages: 536636 kB' 'Mapped: 219616 kB' 'Shmem: 10815844 kB' 'KReclaimable: 560172 kB' 'Slab: 1266708 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706536 kB' 'KernelStack: 22512 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12823416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.280 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.280
00:03:54.281 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.281 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.282 18:07:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37680308 kB' 'MemAvailable: 41770296 kB' 'Buffers: 4096 kB' 'Cached: 14989072 kB' 'SwapCached: 0 kB' 'Active: 11827056 kB' 'Inactive: 3699080 kB' 'Active(anon): 11348828 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536216 kB' 'Mapped: 219616 kB' 'Shmem: 10815860 kB' 'KReclaimable: 560172 kB' 'Slab: 
1266708 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706536 kB' 'KernelStack: 22464 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12821988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220644 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.282 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.282 18:07:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repetitive trace elided: the remaining /proc/meminfo keys (Buffers through CmaFree) are tested against HugePages_Rsvd and skipped via 'continue'] 00:03:54.284 18:07:02
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:54.284 nr_hugepages=1536 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.284 resv_hugepages=0 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.284 surplus_hugepages=0 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.284 anon_hugepages=0 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 37679832 kB' 'MemAvailable: 41769820 kB' 
'Buffers: 4096 kB' 'Cached: 14989088 kB' 'SwapCached: 0 kB' 'Active: 11827300 kB' 'Inactive: 3699080 kB' 'Active(anon): 11349072 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536460 kB' 'Mapped: 219616 kB' 'Shmem: 10815876 kB' 'KReclaimable: 560172 kB' 'Slab: 1266708 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 706536 kB' 'KernelStack: 22592 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12822008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220756 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.284 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [repetitive trace elided: the remaining /proc/meminfo keys (MemAvailable onward) are tested against HugePages_Total and skipped via 'continue']
setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 
18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.285 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 
18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22536368 kB' 'MemUsed: 
10055716 kB' 'SwapCached: 0 kB' 'Active: 6752988 kB' 'Inactive: 412208 kB' 'Active(anon): 6475676 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 412208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7001964 kB' 'Mapped: 86316 kB' 'AnonPages: 166372 kB' 'Shmem: 6312444 kB' 'KernelStack: 12024 kB' 'PageTables: 4792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 384728 kB' 'Slab: 737164 kB' 'SReclaimable: 384728 kB' 'SUnreclaim: 352436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.286 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.287 18:07:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.287 18:07:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[repetitive per-field trace elided: 00:03:54.287 18:07:02 setup/common.sh@32 compares each remaining node0 meminfo key (WritebackTmp through HugePages_Free) against HugePages_Surp and issues `continue` until the key matches]
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:54.287 18:07:02
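The trace above is setup/common.sh's get_meminfo walking a meminfo-style file key by key with `IFS=': ' read -r var val _`, skipping (`continue`) every field until the requested one (here HugePages_Surp) matches, then echoing its value. A minimal standalone sketch of that parsing pattern; the function name and sample file below are illustrative, not the SPDK originals:

```shell
#!/usr/bin/env bash
# Sketch: return the numeric value of one field from a meminfo-style file,
# mirroring the IFS=': ' / read -r var val _ loop seen in the trace.
# Note: with three names, "kB" lands in _, so only the number is returned.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Demo against a synthetic meminfo fragment
tmp=$(mktemp)
printf '%s\n' 'MemTotal: 27703108 kB' 'HugePages_Surp: 0' > "$tmp"
get_meminfo_field HugePages_Surp "$tmp"   # prints: 0
rm -f "$tmp"
```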
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:54.287 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.288 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.288 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.288 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.288 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 15146660 kB' 'MemUsed: 12556448 kB' 'SwapCached: 0 kB' 'Active: 5074184 kB' 'Inactive: 3286872 kB' 'Active(anon): 4873268 kB' 'Inactive(anon): 0 kB' 'Active(file): 200916 kB' 'Inactive(file): 3286872 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7991220 kB' 'Mapped: 133300 kB' 'AnonPages: 369956 kB' 'Shmem: 4503432 kB' 'KernelStack: 10584 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175444 kB' 'Slab: 529540 kB' 'SReclaimable: 175444 kB' 'SUnreclaim: 354096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[repetitive per-field trace elided: 00:03:54.288 to 00:03:54.548 18:07:02 setup/common.sh@32 compares each node1 meminfo key (MemTotal through HugePages_Free) against HugePages_Surp and issues `continue` until the key matches]
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.549 18:07:02
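When a node is given, the trace shows common.sh switching mem_f to /sys/devices/system/node/node1/meminfo and then stripping the per-node prefix with `mem=("${mem[@]#Node +([0-9]) }")`, so the same key/value parser works for both the global and per-node files. A small sketch of just that prefix-strip trick (the helper name is mine, not SPDK's):

```shell
#!/usr/bin/env bash
# Sketch: per-node meminfo lines look like "Node 1 HugePages_Total: 1024";
# removing the "Node <n> " prefix normalizes them to /proc/meminfo form.
shopt -s extglob   # required for the +([0-9]) extended pattern below

strip_node_prefix() {
    local line
    while IFS= read -r line; do
        # ${line#pattern}: drop the shortest leading match; lines without
        # the prefix pass through unchanged.
        printf '%s\n' "${line#Node +([0-9]) }"
    done
}

printf 'Node 1 HugePages_Total: 1024\n' | strip_node_prefix
# prints: HugePages_Total: 1024
```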
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:54.549 node0=512 expecting 512
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:54.549 node1=1024 expecting 1024
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:54.549
00:03:54.549 real	0m3.850s
00:03:54.549 user	0m1.335s
00:03:54.549 sys	0m2.521s
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:54.549 18:07:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:54.549 ************************************
00:03:54.549 END TEST custom_alloc
00:03:54.549 ************************************
00:03:54.549 18:07:02 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:54.549 18:07:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:54.549 18:07:02 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:54.549 18:07:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:54.549 ************************************
00:03:54.549 START TEST no_shrink_alloc
00:03:54.549 ************************************
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:54.549 18:07:02
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
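The get_test_nr_hugepages 2097152 0 call traced above turns a requested size into a hugepage count and pins it to the listed node IDs; from the values in the log (2097152 with 'Hugepagesize: 2048 kB' yielding nr_hugepages=1024), the size appears to be in kB. A rough model of that arithmetic; the function name and the kB assumption are mine, inferred from the log, not SPDK's actual implementation:

```shell
#!/usr/bin/env bash
# Sketch of the allocation math visible in the trace: size (assumed kB)
# divided by the hugepage size gives the page count, which is then
# assigned to each user-supplied node ID.
default_hugepage_kb=2048   # inferred from 'Hugepagesize: 2048 kB' in the log

request_pages() {
    local size_kb=$1; shift            # remaining args are node IDs
    local nr=$(( size_kb / default_hugepage_kb ))
    local -A nodes_test=()
    local node
    for node in "$@"; do
        nodes_test[$node]=$nr
    done
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[$node]}"
    done
}

request_pages 2097152 0   # prints: node0=1024
```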
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.549 18:07:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:58.767 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:58.767 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.767 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38703820 kB' 'MemAvailable: 42793808 kB' 'Buffers: 4096 kB' 'Cached: 14989212 kB' 'SwapCached: 0 kB' 'Active: 11827652 kB' 'Inactive: 3699080 kB' 'Active(anon): 11349424 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536640 kB' 'Mapped: 219648 kB' 'Shmem: 10816000 kB' 'KReclaimable: 560172 kB' 'Slab: 1265792 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 705620 kB' 'KernelStack: 22432 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12821380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220756 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
[repetitive per-field trace elided: 00:03:58.768 18:07:06 setup/common.sh@32 compares each meminfo key (MemTotal, MemFree, MemAvailable, ...) against AnonHugePages and issues `continue`; the log is truncated mid-scan after the Mapped comparison]
00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.768 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.769 
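The xtrace above comes from a meminfo lookup loop: each line of /proc/meminfo is split on ': ' into a key and a value, non-matching keys hit `continue`, and the value of the matching key is echoed. A minimal standalone sketch of the same technique (the function name and the sample-file path are illustrative, modeled on the traced get_meminfo helper in setup/common.sh rather than copied from it):

```shell
#!/usr/bin/env bash
# get_meminfo_field: scan a meminfo-style file line by line, splitting each
# line on ': ' into key and value, and print the value of the requested key.
# $1 = key to look up, $2 = file to scan (defaults to /proc/meminfo).
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys
        echo "$val"
        return 0
    done < "$file"
    return 1   # key not found
}

# Demo against a captured snippet so the sketch runs on any system:
printf '%s\n' 'MemTotal: 60295192 kB' 'HugePages_Surp: 0' > /tmp/meminfo.sample
get_meminfo_field HugePages_Surp /tmp/meminfo.sample   # prints 0
```

Note that `IFS=': '` splits on both colon and space, so a line like `MemTotal: 60295192 kB` yields `var=MemTotal`, `val=60295192`, with the `kB` unit discarded into the throwaway `_` variable.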
18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38704468 kB' 'MemAvailable: 42794456 kB' 'Buffers: 4096 kB' 'Cached: 14989216 kB' 'SwapCached: 0 kB' 'Active: 11827276 kB' 'Inactive: 3699080 kB' 'Active(anon): 11349048 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536264 kB' 'Mapped: 219632 kB' 'Shmem: 10816004 kB' 'KReclaimable: 560172 kB' 'Slab: 1265792 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 705620 kB' 'KernelStack: 22416 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12821396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.769 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r / [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue iterations repeat for each remaining /proc/meminfo key, MemFree through HugePages_Free; the log is truncated mid-iteration ...] 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 --
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.771 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38705096 kB' 'MemAvailable: 42795084 kB' 'Buffers: 4096 kB' 'Cached: 14989236 kB' 'SwapCached: 0 kB' 'Active: 11827336 kB' 'Inactive: 3699080 kB' 'Active(anon): 11349108 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535816 kB' 'Mapped: 219632 kB' 'Shmem: 10816024 kB' 'KReclaimable: 560172 kB' 'Slab: 1265868 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 705696 kB' 'KernelStack: 22400 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12821420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.772 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.773 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 18:07:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 
18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.774 nr_hugepages=1024 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.774 resv_hugepages=0 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.774 surplus_hugepages=0 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.774 anon_hugepages=0 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.774 18:07:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38704272 kB' 'MemAvailable: 42794260 kB' 'Buffers: 4096 kB' 'Cached: 14989256 kB' 'SwapCached: 0 kB' 'Active: 11827456 kB' 'Inactive: 3699080 kB' 'Active(anon): 11349228 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536380 kB' 'Mapped: 219628 kB' 'Shmem: 10816044 kB' 'KReclaimable: 560172 kB' 'Slab: 1265868 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 705696 kB' 'KernelStack: 22416 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12821244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' [repetitive trace elided: setup/common.sh@31-32 reads each /proc/meminfo key (MemTotal … Unaccepted) with IFS=': ' and executes `continue` for every key that is not HugePages_Total] 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- #
no_nodes=2 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21480648 kB' 'MemUsed: 11111436 kB' 'SwapCached: 0 kB' 'Active: 6753052 kB' 'Inactive: 412208 kB' 'Active(anon): 6475740 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 412208 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7002128 kB' 'Mapped: 86316 kB' 'AnonPages: 166320 kB' 'Shmem: 6312608 kB' 'KernelStack: 11944 kB' 'PageTables: 4748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 384728 kB' 'Slab: 736536 kB' 'SReclaimable: 384728 kB' 'SUnreclaim: 351808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' [repetitive trace elided: setup/common.sh@31-32 reads each node0 meminfo key (MemTotal … HugePages_Free) with IFS=': ' and executes `continue` for every key that is not HugePages_Surp] 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@33 -- # echo 0 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.778 node0=1024 expecting 1024 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.778 18:07:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:02.971 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:00:04.0 (8086 2021): Already using the vfio-pci 
driver 00:04:02.971 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:02.971 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:02.971 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
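The trace above shows `verify_nr_hugepages` comparing the per-node hugepage count against an expectation (the log prints "Requested 512 hugepages but 1024 already allocated on node0"). A minimal standalone sketch of that kind of per-node check — a hypothetical helper, not the actual code in `setup/hugepages.sh` — reads the `HugePages_Total` field from a per-node meminfo file:

```shell
#!/usr/bin/env bash
# Hypothetical helper (an illustration, not taken from setup/hugepages.sh):
# print the HugePages_Total count from a per-node meminfo file, whose lines
# look like:
#   Node 0 HugePages_Total:     1024
node_hugepages_total() {
    local meminfo_file=$1
    # $1=Node, $2=<node id>, $3=HugePages_Total:, $4=<count>
    awk '$3 == "HugePages_Total:" {print $4}' "$meminfo_file"
}

# Example (against the live kernel, mirroring the log's
# "node0=1024 expecting 1024" comparison):
#   node_hugepages_total /sys/devices/system/node/node0/meminfo
```

The comparison itself then reduces to a string test such as `[[ $total == "$expected" ]]`, which is the shape of the `[[ 1024 == \1\0\2\4 ]]` check visible in the trace.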
00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38701512 kB' 'MemAvailable: 42791500 kB' 'Buffers: 4096 kB' 'Cached: 14989364 kB' 'SwapCached: 0 kB' 'Active: 11826716 kB' 'Inactive: 3699080 kB' 'Active(anon): 11348488 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535156 kB' 'Mapped: 219656 kB' 'Shmem: 10816152 kB' 'KReclaimable: 560172 kB' 'Slab: 1265840 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 705668 kB' 'KernelStack: 22448 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12822460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 
kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.971 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.972 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
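The long runs of `[[ KEY == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` entries above are the `set -x` trace of `get_meminfo` in `setup/common.sh` scanning every `/proc/meminfo` key until it finds the requested one. A minimal sketch of that parsing technique — a standalone approximation of what the trace shows (`mapfile`, stripping the `Node <n>` prefix with an extglob, then `IFS=': ' read -r var val _`), not the upstream helper verbatim:

```shell
#!/usr/bin/env bash
# Approximate reimplementation of the get_meminfo pattern traced in the log;
# the real helper lives in setup/common.sh and may differ in detail.
shopt -s extglob  # enables the +([0-9]) pattern used to strip node prefixes

# get_meminfo KEY [FILE]: print KEY's value from a meminfo-style file
# (default /proc/meminfo), or 0 if the key is absent.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _ line
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo files prefix every line with "Node <n> "; strip it so
    # both the global and per-node files parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "HugePages_Total:    1024" into var=HugePages_Total val=1024,
        # discarding any trailing unit (e.g. "kB") into _.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    echo 0
}
```

Each non-matching key produces one `[[ ... ]]`/`continue` pair in the `-x` trace, which is why a single `get_meminfo HugePages_Surp` call expands to the dozens of log entries seen here.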
00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.973 18:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38701812 kB' 'MemAvailable: 42791800 kB' 'Buffers: 4096 kB' 'Cached: 14989368 kB' 'SwapCached: 0 kB' 'Active: 11826392 kB' 'Inactive: 3699080 kB' 'Active(anon): 11348164 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535324 kB' 'Mapped: 219636 kB' 'Shmem: 10816156 kB' 'KReclaimable: 560172 kB' 'Slab: 1265880 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 705708 kB' 'KernelStack: 22464 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12822480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220692 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:02.973 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [identical "IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue" trace repeated for every /proc/meminfo field from SwapCached through CmaFree; elided] 00:04:02.975 18:07:11
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38702064 kB' 'MemAvailable: 42792052 kB' 'Buffers: 4096 kB' 'Cached: 14989384 kB' 'SwapCached: 0 kB' 'Active: 11826408 kB' 'Inactive: 3699080 kB' 'Active(anon): 11348180 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535328 kB' 'Mapped: 219636 kB' 'Shmem: 10816172 kB' 'KReclaimable: 560172 kB' 'Slab: 1265880 kB' 
'SReclaimable: 560172 kB' 'SUnreclaim: 705708 kB' 'KernelStack: 22464 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12822500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220692 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.975 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [identical "IFS=': ' / read -r var val _ / [[ <field> == HugePages_Rsvd ]] / continue" trace repeated for the remaining /proc/meminfo fields from Buffers onward; log chunk ends mid-scan at CmaTotal]
setup/common.sh@31 -- # IFS=': ' 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.977 18:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.977 nr_hugepages=1024 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.977 resv_hugepages=0 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.977 surplus_hugepages=0 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.977 anon_hugepages=0 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
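The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo line by line with IFS=': ' until the requested key (here HugePages_Rsvd) matches, then echoing its value and returning. A minimal stand-alone sketch of that parsing pattern; the function body is reconstructed from the trace, and the optional file argument is added here for illustration only, not part of the SPDK script:

```shell
#!/usr/bin/env bash
# Reconstruction of the get_meminfo pattern visible in the trace:
# split each "Key: value kB" line on ': ' and print the value for
# the requested key, skipping every other key with `continue`.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. skip MemTotal, Buffers, ...
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

resv=$(get_meminfo HugePages_Rsvd)
echo "resv_hugepages=$resv"
```

With IFS=': ' both the colon and the space act as field separators, so a line like "HugePages_Rsvd: 0" splits cleanly into the key and its value, and a trailing "kB" unit lands in the throwaway `_` field.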
-- # mapfile -t mem 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 38702656 kB' 'MemAvailable: 42792644 kB' 'Buffers: 4096 kB' 'Cached: 14989408 kB' 'SwapCached: 0 kB' 'Active: 11826432 kB' 'Inactive: 3699080 kB' 'Active(anon): 11348204 kB' 'Inactive(anon): 0 kB' 'Active(file): 478228 kB' 'Inactive(file): 3699080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535328 kB' 'Mapped: 219636 kB' 'Shmem: 10816196 kB' 'KReclaimable: 560172 kB' 'Slab: 1265880 kB' 'SReclaimable: 560172 kB' 'SUnreclaim: 705708 kB' 'KernelStack: 22464 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12822524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220692 kB' 'VmallocChunk: 0 kB' 'Percpu: 115136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.977 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical IFS=': ' / read / continue trace entries repeat for each /proc/meminfo key (MemFree, MemAvailable, Buffers, ... through Unaccepted) while scanning for HugePages_Total ...] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- 
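The get_nodes portion of this trace enumerates the NUMA node directories with a bash extglob pattern and records a per-node hugepage count (1024 on node 0, 0 on node 1, no_nodes=2). A hedged sketch of that enumeration; the hugepages-2048kB path and the base-directory argument are assumptions for illustration, not lifted from the SPDK script:

```shell
#!/usr/bin/env bash
shopt -s extglob nullglob   # extglob is required to parse node+([0-9])

# Sketch of the get_nodes pattern: collect nr_hugepages per NUMA node.
declare -a nodes_sys
get_nodes() {
    local base=${1:-/sys/devices/system/node} node f
    nodes_sys=()
    for node in "$base"/node+([0-9]); do
        # Assumed location of the per-node 2 MiB hugepage counter.
        f=$node/hugepages/hugepages-2048kB/nr_hugepages
        # Index by the trailing node number, e.g. .../node0 -> 0
        [[ -r $f ]] && nodes_sys[${node##*node}]=$(<"$f") \
                    || nodes_sys[${node##*node}]=0
    done
    no_nodes=${#nodes_sys[@]}
}

get_nodes
echo "no_nodes=$no_nodes"
```

The `${node##*node}` expansion strips everything up to and including the last "node", turning a directory path into a bare node index, which is why the trace shows assignments like `nodes_sys[0]=1024`.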
setup/common.sh@18 -- # local node=0 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21480768 kB' 'MemUsed: 11111316 kB' 'SwapCached: 0 kB' 'Active: 6753320 kB' 'Inactive: 412208 kB' 'Active(anon): 6476008 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 412208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7002264 kB' 'Mapped: 86316 kB' 'AnonPages: 166428 kB' 'Shmem: 6312744 kB' 'KernelStack: 11992 kB' 'PageTables: 4692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 384728 kB' 'Slab: 736728 kB' 'SReclaimable: 384728 kB' 'SUnreclaim: 352000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
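When a node argument is supplied (node=0 here), the trace shows get_meminfo switching mem_f to the per-node sysfs file and stripping the "Node N " prefix from every line with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") before running the same key scan. A small sketch of just that prefix-stripping step, assuming the standard sysfs per-node meminfo line format; the sample lines below are illustrative:

```shell
#!/usr/bin/env bash
shopt -s extglob   # +([0-9]) in the parameter expansion needs extglob

# Per-node sysfs lines look like "Node 0 MemTotal: 32592084 kB";
# strip the "Node N " prefix so they parse like /proc/meminfo lines.
mapfile -t mem <<'EOF'
Node 0 MemTotal: 32592084 kB
Node 0 HugePages_Total: 1024
Node 0 HugePages_Surp: 0
EOF

mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
```

After the strip, every element starts directly with the meminfo key, so the IFS=': ' read loop seen throughout this trace works identically for the global and per-node cases.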
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.979 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.980 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024'
00:04:02.981 node0=1024 expecting 1024
00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:02.981
00:04:02.981 real 0m8.360s
00:04:02.981 user 0m3.097s
00:04:02.981 sys 0m5.382s
00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:02.981 18:07:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:02.981 ************************************
00:04:02.981 END TEST no_shrink_alloc
00:04:02.981 ************************************
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:02.981 18:07:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:02.981
00:04:02.981 real 0m32.075s
00:04:02.981 user 0m11.083s
00:04:02.981 sys 0m19.525s
00:04:02.981 18:07:11 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:02.981 18:07:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:02.981 ************************************
00:04:02.981 END TEST hugepages
00:04:02.981 ************************************
00:04:02.981 18:07:11 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh
00:04:02.981 18:07:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:02.981 18:07:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:02.981 18:07:11 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:02.981 ************************************
00:04:02.981 START TEST driver
00:04:02.981 ************************************
00:04:02.981 18:07:11 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh
00:04:03.241 * Looking for test storage...
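The long trace above is SPDK's `get_meminfo` helper from `setup/common.sh` at work: it picks `/proc/meminfo` or the per-node `/sys/devices/system/node/nodeN/meminfo`, strips the `Node N` prefix, then splits each line on `': '` with `read -r var val _` until the requested key matches and its value is echoed. A minimal standalone sketch of that pattern follows; the optional third `<file>` argument is an illustrative hook added here so the parser can be exercised against a canned snapshot, and is not part of the traced script:

```shell
get_meminfo() {
    # Usage: get_meminfo <Key> [<node>] [<file>]
    # Prints the value for <Key> (most keys are reported in kB).
    local key=$1 node=${2-} mem_f=${3:-/proc/meminfo} var val _
    if [[ -z ${3-} && -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <N> "; strip that, then
    # split on ': ' exactly as the traced `IFS=': ' read -r var val _` loop does.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1  # key not present in this meminfo file
}
```

With a per-node snapshot like the one printed in the trace, `get_meminfo HugePages_Total 0` yields `1024` and `get_meminfo HugePages_Surp 0` yields `0`, which is what the `(( 1024 == nr_hugepages + surp + resv ))` check above consumes.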
00:04:03.241 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:03.241 18:07:11 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:03.241 18:07:11 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.241 18:07:11 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.820 18:07:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:09.820 18:07:17 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.820 18:07:17 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.820 18:07:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:09.820 ************************************ 00:04:09.820 START TEST guess_driver 00:04:09.820 ************************************ 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- 
# (( 256 > 0 )) 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:09.820 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:09.820 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:09.820 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:09.820 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:09.820 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:09.820 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:09.820 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:09.820 Looking for driver=vfio-pci 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output 
config 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.820 18:07:17 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # 
[[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.113 18:07:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.025 18:07:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.025 18:07:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.025 18:07:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.025 18:07:23 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:15.025 18:07:23 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:15.025 18:07:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.025 18:07:23 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:04:20.304
00:04:20.304 real 0m11.561s
00:04:20.304 user 0m2.938s
00:04:20.304 sys 0m5.967s
00:04:20.304 18:07:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:20.304 18:07:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:20.304 ************************************
00:04:20.304 END TEST guess_driver
00:04:20.304 ************************************
00:04:20.304
00:04:20.304 real 0m17.271s
00:04:20.304 user 0m4.598s
00:04:20.304 sys 0m9.255s
00:04:20.304 18:07:28 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:20.304 18:07:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:20.304 ************************************
00:04:20.304 END TEST driver
00:04:20.304 ************************************
00:04:20.304 18:07:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh
00:04:20.304 18:07:28 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:20.304 18:07:28 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:20.304 18:07:28 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:20.304 ************************************
00:04:20.304 START TEST devices
00:04:20.304 ************************************
00:04:20.304 18:07:28 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh
00:04:20.563 * Looking for test storage...
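The `guess_driver` trace above shows the selection logic in `test/setup/driver.sh`: vfio-pci is chosen when the kernel exposes IOMMU groups (here `(( 256 > 0 ))`) or unsafe no-IOMMU mode is enabled, and `modprobe --show-depends vfio_pci` resolves to real `.ko` modules. A hedged sketch of that decision; the `uio_pci_generic` fallback and the exact condition ordering are assumptions for illustration, not verbatim from the script:

```shell
is_driver() {
    # A module counts as available when modprobe resolves it to .ko files,
    # mirroring the `== *\.\k\o*` check in the trace.
    modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
}

pick_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # Note: without nullglob an empty directory still yields one (literal)
    # array element, the same caveat the traced expansion has.
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if { [[ $unsafe_vfio == [Yy] ]] || (( ${#iommu_groups[@]} > 0 )); } &&
            is_driver vfio_pci; then
        echo vfio-pci
    elif is_driver uio_pci_generic; then   # assumed fallback
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}
```

The caller in the trace then matches the result against `No valid driver found` before echoing `Looking for driver=vfio-pci`.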
00:04:20.563 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:20.563 18:07:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:20.563 18:07:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:20.563 18:07:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.563 18:07:28 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:25.880 18:07:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:25.880 18:07:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:25.880 18:07:33 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:25.880 18:07:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:25.880 18:07:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:25.880 18:07:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:25.880 18:07:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.880 18:07:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:25.880 18:07:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:25.880 18:07:33 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:25.880 No valid GPT data, bailing 00:04:25.880 18:07:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.880 18:07:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:25.880 18:07:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:25.880 18:07:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:25.880 18:07:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:25.880 18:07:33 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:25.880 18:07:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:25.880 18:07:33 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.880 18:07:33 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.880 18:07:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.880 ************************************ 00:04:25.880 START TEST nvme_mount 00:04:25.880 ************************************ 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:25.880 18:07:33 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:25.880 18:07:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:26.139 Creating new GPT entries in memory. 00:04:26.139 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.139 other utilities. 00:04:26.139 18:07:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.139 18:07:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.139 18:07:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.139 18:07:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.139 18:07:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:27.077 Creating new GPT entries in memory. 00:04:27.077 The operation has completed successfully. 
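The `sgdisk --new=1:2048:2099199` bounds above come from simple sector arithmetic in `setup/common.sh`: the 1 GiB partition size (`size=1073741824`) is converted to 512-byte sectors via `(( size /= 512 ))`, the first partition starts at LBA 2048, and each later partition starts one sector past the previous one's end. A sketch of that calculation (the function name `partition_ranges` is ours):

```python
def partition_ranges(part_no, size_bytes=1073741824, sector=512, first_start=2048):
    """Reproduce setup/common.sh@51-60: convert bytes to 512-byte sectors,
    start partition 1 at LBA 2048, and chain each subsequent partition
    immediately after the previous one's end sector."""
    size = size_bytes // sector  # (( size /= 512 )) → 2097152 sectors
    ranges, part_end = [], 0
    for part in range(1, part_no + 1):
        part_start = first_start if part == 1 else part_end + 1
        part_end = part_start + size - 1
        ranges.append((part, part_start, part_end))
    return ranges

# Matches `flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199`
print(partition_ranges(1))  # → [(1, 2048, 2099199)]
```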
00:04:27.077 18:07:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.077 18:07:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.077 18:07:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2080977 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.337 
18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.337 18:07:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:31.534 18:07:39 
setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:31.534 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.534 18:07:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.794 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:31.794 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:31.794 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:31.794 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.794 18:07:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.991 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # 
setup output config 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.992 18:07:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.285 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.285 00:04:39.285 real 0m13.951s 00:04:39.285 user 0m4.022s 00:04:39.285 sys 0m7.755s 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.285 18:07:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:39.285 ************************************ 00:04:39.285 END TEST nvme_mount 00:04:39.286 ************************************ 00:04:39.286 18:07:47 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:39.286 18:07:47 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.286 18:07:47 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.286 18:07:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.286 ************************************ 00:04:39.286 START TEST dm_mount 00:04:39.286 ************************************ 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.286 18:07:47 
setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:39.286 18:07:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:40.223 Creating new GPT entries in memory. 00:04:40.223 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:40.223 other utilities. 00:04:40.223 18:07:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:40.223 18:07:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.223 18:07:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.223 18:07:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.223 18:07:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:41.161 Creating new GPT entries in memory. 00:04:41.161 The operation has completed successfully. 00:04:41.161 18:07:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.161 18:07:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.161 18:07:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.161 18:07:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.161 18:07:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:42.538 The operation has completed successfully. 
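The two `sgdisk` calls above (`--new=1:2048:2099199`, then `--new=2:2099200:4196351`) are the same sector arithmetic as the single-partition nvme_mount run, iterated for `part_no=2`: the second 1 GiB partition begins one sector after the first one ends. A self-contained check of both ranges:

```python
# (( size /= 512 )): 1 GiB expressed in 512-byte sectors
SIZE_SECTORS = 1073741824 // 512  # 2097152

part1_start = 2048                           # first partition always starts at LBA 2048
part1_end = part1_start + SIZE_SECTORS - 1   # 2099199
part2_start = part1_end + 1                  # 2099200, one sector after partition 1
part2_end = part2_start + SIZE_SECTORS - 1   # 4196351

print(f"--new=1:{part1_start}:{part1_end}")  # → --new=1:2048:2099199
print(f"--new=2:{part2_start}:{part2_end}")  # → --new=2:2099200:4196351
```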
00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2086165 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local 
dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:42.538 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:42.539 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:42.539 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.539 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.539 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.539 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:42.539 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.539 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.539 
18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:42.539 18:07:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.539 18:07:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.539 18:07:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.830 
18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:45.830 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.090 
18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.090 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.349 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.349 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:46.349 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:46.349 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.349 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.349 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:46.349 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:46.349 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:46.349 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:46.349 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:46.350 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file= 00:04:46.350 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.350 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.350 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.350 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.350 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:46.350 18:07:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.350 18:07:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.350 18:07:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:49.710 18:07:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.710 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.710 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:49.710 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:49.710 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:49.710 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:49.710 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:49.710 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:49.710 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.710 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:49.710 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:04:49.710 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:49.711 18:07:58 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:49.711 00:04:49.711 real 0m10.509s 00:04:49.711 user 0m2.422s 00:04:49.711 sys 0m5.037s 00:04:49.711 18:07:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.711 18:07:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:49.711 ************************************ 00:04:49.711 END TEST dm_mount 00:04:49.711 ************************************ 00:04:49.711 18:07:58 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:49.711 18:07:58 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:49.711 18:07:58 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.711 18:07:58 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.711 18:07:58 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:49.711 18:07:58 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.711 18:07:58 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:49.970 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:49.970 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:49.970 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:49.970 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:49.970 18:07:58 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:49.970 18:07:58 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:49.970 18:07:58 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 
00:04:49.970 18:07:58 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.970 18:07:58 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:49.970 18:07:58 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.970 18:07:58 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:49.970 00:04:49.970 real 0m29.663s 00:04:49.970 user 0m8.233s 00:04:49.970 sys 0m16.148s 00:04:49.970 18:07:58 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.970 18:07:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:49.970 ************************************ 00:04:49.970 END TEST devices 00:04:49.970 ************************************ 00:04:49.970 00:04:49.971 real 1m46.205s 00:04:49.971 user 0m32.019s 00:04:49.971 sys 1m1.495s 00:04:49.971 18:07:58 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.971 18:07:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.971 ************************************ 00:04:49.971 END TEST setup.sh 00:04:49.971 ************************************ 00:04:50.230 18:07:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:53.523 Hugepages 00:04:53.523 node hugesize free / total 00:04:53.523 node0 1048576kB 0 / 0 00:04:53.523 node0 2048kB 1024 / 1024 00:04:53.523 node1 1048576kB 0 / 0 00:04:53.523 node1 2048kB 1024 / 1024 00:04:53.523 00:04:53.523 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.523 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:53.523 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:53.523 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:53.523 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:53.523 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:53.523 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:53.523 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:53.523 I/OAT 0000:00:04.7 8086 2021 
0 ioatdma - - 00:04:53.523 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:53.523 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:53.524 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:53.524 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:53.524 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:53.524 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:53.524 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:53.524 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:53.783 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:53.783 18:08:02 -- spdk/autotest.sh@130 -- # uname -s 00:04:53.783 18:08:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:53.783 18:08:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:53.783 18:08:02 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:57.976 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.976 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.881 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:00.141 18:08:08 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:01.080 18:08:09 -- common/autotest_common.sh@1533 -- # bdfs=() 
00:05:01.080 18:08:09 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:01.080 18:08:09 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.080 18:08:09 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:01.080 18:08:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:01.080 18:08:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:01.080 18:08:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.080 18:08:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:01.080 18:08:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:01.080 18:08:09 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:01.080 18:08:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:01.080 18:08:09 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.276 Waiting for block devices as requested 00:05:05.276 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:05.276 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:05.276 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:05.276 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:05.536 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:05.536 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:05.536 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:05.536 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:05.795 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:05.795 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:05.795 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:06.055 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:06.055 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:06.055 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:06.315 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:06.315 0000:80:04.0 (8086 
2021): vfio-pci -> ioatdma 00:05:06.574 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:06.574 18:08:15 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:06.574 18:08:15 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:06.574 18:08:15 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:06.574 18:08:15 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:06.574 18:08:15 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:06.574 18:08:15 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:06.574 18:08:15 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:06.574 18:08:15 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:06.574 18:08:15 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:06.574 18:08:15 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:06.574 18:08:15 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:06.574 18:08:15 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:06.574 18:08:15 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:06.574 18:08:15 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:06.574 18:08:15 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:06.574 18:08:15 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:06.574 18:08:15 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:06.574 18:08:15 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:06.574 18:08:15 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:06.574 18:08:15 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:06.574 18:08:15 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:06.574 18:08:15 -- common/autotest_common.sh@1557 -- # continue 
00:05:06.574 18:08:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:06.574 18:08:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.574 18:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:06.574 18:08:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:06.574 18:08:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.574 18:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:06.834 18:08:15 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:11.029 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:11.029 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:12.456 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:12.716 18:08:21 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:12.716 18:08:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:12.716 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:05:12.716 18:08:21 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:12.716 18:08:21 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:12.716 18:08:21 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 
00:05:12.716 18:08:21 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:12.716 18:08:21 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:12.716 18:08:21 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:12.716 18:08:21 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:12.716 18:08:21 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:12.716 18:08:21 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.716 18:08:21 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.716 18:08:21 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:12.716 18:08:21 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:12.716 18:08:21 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:12.716 18:08:21 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:12.716 18:08:21 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:12.716 18:08:21 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:12.716 18:08:21 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:12.716 18:08:21 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:12.976 18:08:21 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:05:12.976 18:08:21 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:12.976 18:08:21 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2097241 00:05:12.976 18:08:21 -- common/autotest_common.sh@1598 -- # waitforlisten 2097241 00:05:12.976 18:08:21 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.976 18:08:21 -- common/autotest_common.sh@831 -- # '[' -z 2097241 ']' 00:05:12.976 18:08:21 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.976 18:08:21 -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.976 18:08:21 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.976 18:08:21 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.976 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:05:12.976 [2024-07-24 18:08:21.374925] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:05:12.976 [2024-07-24 18:08:21.374976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097241 ] 00:05:12.976 [2024-07-24 18:08:21.460445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.976 [2024-07-24 18:08:21.533035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.912 18:08:22 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.912 18:08:22 -- common/autotest_common.sh@864 -- # return 0 00:05:13.912 18:08:22 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:13.912 18:08:22 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:13.912 18:08:22 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:17.201 nvme0n1 00:05:17.201 18:08:25 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:17.201 [2024-07-24 18:08:25.322444] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:17.201 request: 00:05:17.201 { 00:05:17.201 "nvme_ctrlr_name": "nvme0", 00:05:17.201 "password": "test", 00:05:17.201 
"method": "bdev_nvme_opal_revert", 00:05:17.201 "req_id": 1 00:05:17.201 } 00:05:17.201 Got JSON-RPC error response 00:05:17.201 response: 00:05:17.201 { 00:05:17.201 "code": -32602, 00:05:17.201 "message": "Invalid parameters" 00:05:17.201 } 00:05:17.201 18:08:25 -- common/autotest_common.sh@1604 -- # true 00:05:17.201 18:08:25 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:17.201 18:08:25 -- common/autotest_common.sh@1608 -- # killprocess 2097241 00:05:17.201 18:08:25 -- common/autotest_common.sh@950 -- # '[' -z 2097241 ']' 00:05:17.201 18:08:25 -- common/autotest_common.sh@954 -- # kill -0 2097241 00:05:17.201 18:08:25 -- common/autotest_common.sh@955 -- # uname 00:05:17.201 18:08:25 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.201 18:08:25 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2097241 00:05:17.201 18:08:25 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.201 18:08:25 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.201 18:08:25 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2097241' 00:05:17.201 killing process with pid 2097241 00:05:17.201 18:08:25 -- common/autotest_common.sh@969 -- # kill 2097241 00:05:17.201 18:08:25 -- common/autotest_common.sh@974 -- # wait 2097241 00:05:19.735 18:08:27 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:19.735 18:08:27 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:19.735 18:08:27 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:05:19.735 18:08:27 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:05:19.735 18:08:27 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:05:20.303 Restarting all devices. 
00:05:26.875 lstat() error: No such file or directory 00:05:26.876 QAT Error: No GENERAL section found 00:05:26.876 Failed to configure qat_dev0 00:05:26.876 lstat() error: No such file or directory 00:05:26.876 QAT Error: No GENERAL section found 00:05:26.876 Failed to configure qat_dev1 00:05:26.876 lstat() error: No such file or directory 00:05:26.876 QAT Error: No GENERAL section found 00:05:26.876 Failed to configure qat_dev2 00:05:26.876 lstat() error: No such file or directory 00:05:26.876 QAT Error: No GENERAL section found 00:05:26.876 Failed to configure qat_dev3 00:05:26.876 lstat() error: No such file or directory 00:05:26.876 QAT Error: No GENERAL section found 00:05:26.876 Failed to configure qat_dev4 00:05:26.876 enable sriov 00:05:26.876 Checking status of all devices. 00:05:26.876 There is 5 QAT acceleration device(s) in the system: 00:05:26.876 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:05:26.876 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:05:26.876 qat_dev2 - type: c6xx, inst_id: 2, node_id: 1, bsf: 0000:b1:00.0, #accel: 5 #engines: 10 state: down 00:05:26.876 qat_dev3 - type: c6xx, inst_id: 3, node_id: 1, bsf: 0000:b3:00.0, #accel: 5 #engines: 10 state: down 00:05:26.876 qat_dev4 - type: c6xx, inst_id: 4, node_id: 1, bsf: 0000:b5:00.0, #accel: 5 #engines: 10 state: down 00:05:26.876 0000:3d:00.0 set to 16 VFs 00:05:27.813 0000:3f:00.0 set to 16 VFs 00:05:28.382 0000:b1:00.0 set to 16 VFs 00:05:29.319 0000:b3:00.0 set to 16 VFs 00:05:29.888 0000:b5:00.0 set to 16 VFs 00:05:32.424 Properly configured the qat device with driver uio_pci_generic. 
00:05:32.424 18:08:40 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:32.424 18:08:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.424 18:08:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.424 18:08:40 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:32.424 18:08:40 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:32.424 18:08:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.424 18:08:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.424 18:08:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.424 ************************************ 00:05:32.424 START TEST env 00:05:32.424 ************************************ 00:05:32.424 18:08:40 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:32.424 * Looking for test storage... 00:05:32.424 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:05:32.424 18:08:40 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:32.424 18:08:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.424 18:08:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.424 18:08:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.424 ************************************ 00:05:32.424 START TEST env_memory 00:05:32.424 ************************************ 00:05:32.424 18:08:40 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:32.424 00:05:32.424 00:05:32.424 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.424 http://cunit.sourceforge.net/ 00:05:32.424 00:05:32.424 00:05:32.424 Suite: memory 00:05:32.424 Test: alloc and free memory map ...[2024-07-24 18:08:40.945214] 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:32.424 passed 00:05:32.424 Test: mem map translation ...[2024-07-24 18:08:40.963436] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:32.424 [2024-07-24 18:08:40.963453] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:32.424 [2024-07-24 18:08:40.963490] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:32.424 [2024-07-24 18:08:40.963500] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:32.424 passed 00:05:32.424 Test: mem map registration ...[2024-07-24 18:08:41.000102] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:32.424 [2024-07-24 18:08:41.000119] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:32.424 passed 00:05:32.686 Test: mem map adjacent registrations ...passed 00:05:32.686 00:05:32.686 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.686 suites 1 1 n/a 0 0 00:05:32.686 tests 4 4 4 0 0 00:05:32.686 asserts 152 152 152 0 n/a 00:05:32.686 00:05:32.686 Elapsed time = 0.132 seconds 00:05:32.686 00:05:32.686 real 0m0.145s 00:05:32.686 user 0m0.135s 00:05:32.686 sys 0m0.010s 00:05:32.686 18:08:41 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 
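The `mem map translation` errors above are the unit test deliberately passing misaligned arguments: the rejected calls use `vaddr=2097152 len=1234` and `vaddr=1234 len=2097152`, which suggests SPDK's mem map validates both fields against a 2 MiB granularity. A rough illustration of that check (an assumption inferred from the logged values, not SPDK's actual implementation):

```python
# Assumed translation granularity, inferred from the 2097152 (2 MiB)
# values in the logged error messages.
HUGEPAGE = 2 * 1024 * 1024

def valid_translation(vaddr: int, length: int) -> bool:
    """Both the virtual address and the length must be 2 MiB-aligned."""
    return vaddr % HUGEPAGE == 0 and length % HUGEPAGE == 0

# The two rejected calls from the log: misaligned length, then misaligned vaddr.
assert not valid_translation(2097152, 1234)
assert not valid_translation(1234, 2097152)
# A fully aligned request passes the check.
assert valid_translation(0x200000, 0x200000)
```
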
00:05:32.686 18:08:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:32.686 ************************************ 00:05:32.686 END TEST env_memory 00:05:32.686 ************************************ 00:05:32.686 18:08:41 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:32.686 18:08:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.686 18:08:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.686 18:08:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.686 ************************************ 00:05:32.686 START TEST env_vtophys 00:05:32.686 ************************************ 00:05:32.686 18:08:41 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:32.686 EAL: lib.eal log level changed from notice to debug 00:05:32.686 EAL: Detected lcore 0 as core 0 on socket 0 00:05:32.686 EAL: Detected lcore 1 as core 1 on socket 0 00:05:32.686 EAL: Detected lcore 2 as core 2 on socket 0 00:05:32.686 EAL: Detected lcore 3 as core 3 on socket 0 00:05:32.686 EAL: Detected lcore 4 as core 4 on socket 0 00:05:32.686 EAL: Detected lcore 5 as core 5 on socket 0 00:05:32.686 EAL: Detected lcore 6 as core 6 on socket 0 00:05:32.686 EAL: Detected lcore 7 as core 8 on socket 0 00:05:32.686 EAL: Detected lcore 8 as core 9 on socket 0 00:05:32.686 EAL: Detected lcore 9 as core 10 on socket 0 00:05:32.686 EAL: Detected lcore 10 as core 11 on socket 0 00:05:32.686 EAL: Detected lcore 11 as core 12 on socket 0 00:05:32.686 EAL: Detected lcore 12 as core 13 on socket 0 00:05:32.686 EAL: Detected lcore 13 as core 14 on socket 0 00:05:32.686 EAL: Detected lcore 14 as core 16 on socket 0 00:05:32.686 EAL: Detected lcore 15 as core 17 on socket 0 00:05:32.686 EAL: Detected lcore 16 as core 18 on socket 0 00:05:32.686 EAL: Detected lcore 17 as core 19 on socket 0 00:05:32.686 EAL: 
Detected lcore 18 as core 20 on socket 0 00:05:32.686 EAL: Detected lcore 19 as core 21 on socket 0 00:05:32.686 EAL: Detected lcore 20 as core 22 on socket 0 00:05:32.686 EAL: Detected lcore 21 as core 24 on socket 0 00:05:32.686 EAL: Detected lcore 22 as core 25 on socket 0 00:05:32.686 EAL: Detected lcore 23 as core 26 on socket 0 00:05:32.686 EAL: Detected lcore 24 as core 27 on socket 0 00:05:32.686 EAL: Detected lcore 25 as core 28 on socket 0 00:05:32.686 EAL: Detected lcore 26 as core 29 on socket 0 00:05:32.686 EAL: Detected lcore 27 as core 30 on socket 0 00:05:32.686 EAL: Detected lcore 28 as core 0 on socket 1 00:05:32.686 EAL: Detected lcore 29 as core 1 on socket 1 00:05:32.686 EAL: Detected lcore 30 as core 2 on socket 1 00:05:32.686 EAL: Detected lcore 31 as core 3 on socket 1 00:05:32.686 EAL: Detected lcore 32 as core 4 on socket 1 00:05:32.686 EAL: Detected lcore 33 as core 5 on socket 1 00:05:32.686 EAL: Detected lcore 34 as core 6 on socket 1 00:05:32.686 EAL: Detected lcore 35 as core 8 on socket 1 00:05:32.686 EAL: Detected lcore 36 as core 9 on socket 1 00:05:32.686 EAL: Detected lcore 37 as core 10 on socket 1 00:05:32.686 EAL: Detected lcore 38 as core 11 on socket 1 00:05:32.686 EAL: Detected lcore 39 as core 12 on socket 1 00:05:32.686 EAL: Detected lcore 40 as core 13 on socket 1 00:05:32.686 EAL: Detected lcore 41 as core 14 on socket 1 00:05:32.686 EAL: Detected lcore 42 as core 16 on socket 1 00:05:32.686 EAL: Detected lcore 43 as core 17 on socket 1 00:05:32.686 EAL: Detected lcore 44 as core 18 on socket 1 00:05:32.686 EAL: Detected lcore 45 as core 19 on socket 1 00:05:32.686 EAL: Detected lcore 46 as core 20 on socket 1 00:05:32.686 EAL: Detected lcore 47 as core 21 on socket 1 00:05:32.686 EAL: Detected lcore 48 as core 22 on socket 1 00:05:32.686 EAL: Detected lcore 49 as core 24 on socket 1 00:05:32.686 EAL: Detected lcore 50 as core 25 on socket 1 00:05:32.686 EAL: Detected lcore 51 as core 26 on socket 1 00:05:32.686 EAL: 
Detected lcore 52 as core 27 on socket 1 00:05:32.686 EAL: Detected lcore 53 as core 28 on socket 1 00:05:32.686 EAL: Detected lcore 54 as core 29 on socket 1 00:05:32.686 EAL: Detected lcore 55 as core 30 on socket 1 00:05:32.686 EAL: Detected lcore 56 as core 0 on socket 0 00:05:32.686 EAL: Detected lcore 57 as core 1 on socket 0 00:05:32.686 EAL: Detected lcore 58 as core 2 on socket 0 00:05:32.686 EAL: Detected lcore 59 as core 3 on socket 0 00:05:32.686 EAL: Detected lcore 60 as core 4 on socket 0 00:05:32.686 EAL: Detected lcore 61 as core 5 on socket 0 00:05:32.686 EAL: Detected lcore 62 as core 6 on socket 0 00:05:32.686 EAL: Detected lcore 63 as core 8 on socket 0 00:05:32.686 EAL: Detected lcore 64 as core 9 on socket 0 00:05:32.686 EAL: Detected lcore 65 as core 10 on socket 0 00:05:32.686 EAL: Detected lcore 66 as core 11 on socket 0 00:05:32.686 EAL: Detected lcore 67 as core 12 on socket 0 00:05:32.686 EAL: Detected lcore 68 as core 13 on socket 0 00:05:32.686 EAL: Detected lcore 69 as core 14 on socket 0 00:05:32.686 EAL: Detected lcore 70 as core 16 on socket 0 00:05:32.686 EAL: Detected lcore 71 as core 17 on socket 0 00:05:32.686 EAL: Detected lcore 72 as core 18 on socket 0 00:05:32.686 EAL: Detected lcore 73 as core 19 on socket 0 00:05:32.686 EAL: Detected lcore 74 as core 20 on socket 0 00:05:32.686 EAL: Detected lcore 75 as core 21 on socket 0 00:05:32.686 EAL: Detected lcore 76 as core 22 on socket 0 00:05:32.686 EAL: Detected lcore 77 as core 24 on socket 0 00:05:32.686 EAL: Detected lcore 78 as core 25 on socket 0 00:05:32.686 EAL: Detected lcore 79 as core 26 on socket 0 00:05:32.686 EAL: Detected lcore 80 as core 27 on socket 0 00:05:32.686 EAL: Detected lcore 81 as core 28 on socket 0 00:05:32.686 EAL: Detected lcore 82 as core 29 on socket 0 00:05:32.686 EAL: Detected lcore 83 as core 30 on socket 0 00:05:32.687 EAL: Detected lcore 84 as core 0 on socket 1 00:05:32.687 EAL: Detected lcore 85 as core 1 on socket 1 00:05:32.687 EAL: 
Detected lcore 86 as core 2 on socket 1 00:05:32.687 EAL: Detected lcore 87 as core 3 on socket 1 00:05:32.687 EAL: Detected lcore 88 as core 4 on socket 1 00:05:32.687 EAL: Detected lcore 89 as core 5 on socket 1 00:05:32.687 EAL: Detected lcore 90 as core 6 on socket 1 00:05:32.687 EAL: Detected lcore 91 as core 8 on socket 1 00:05:32.687 EAL: Detected lcore 92 as core 9 on socket 1 00:05:32.687 EAL: Detected lcore 93 as core 10 on socket 1 00:05:32.687 EAL: Detected lcore 94 as core 11 on socket 1 00:05:32.687 EAL: Detected lcore 95 as core 12 on socket 1 00:05:32.687 EAL: Detected lcore 96 as core 13 on socket 1 00:05:32.687 EAL: Detected lcore 97 as core 14 on socket 1 00:05:32.687 EAL: Detected lcore 98 as core 16 on socket 1 00:05:32.687 EAL: Detected lcore 99 as core 17 on socket 1 00:05:32.687 EAL: Detected lcore 100 as core 18 on socket 1 00:05:32.687 EAL: Detected lcore 101 as core 19 on socket 1 00:05:32.687 EAL: Detected lcore 102 as core 20 on socket 1 00:05:32.687 EAL: Detected lcore 103 as core 21 on socket 1 00:05:32.687 EAL: Detected lcore 104 as core 22 on socket 1 00:05:32.687 EAL: Detected lcore 105 as core 24 on socket 1 00:05:32.687 EAL: Detected lcore 106 as core 25 on socket 1 00:05:32.687 EAL: Detected lcore 107 as core 26 on socket 1 00:05:32.687 EAL: Detected lcore 108 as core 27 on socket 1 00:05:32.687 EAL: Detected lcore 109 as core 28 on socket 1 00:05:32.687 EAL: Detected lcore 110 as core 29 on socket 1 00:05:32.687 EAL: Detected lcore 111 as core 30 on socket 1 00:05:32.687 EAL: Maximum logical cores by configuration: 128 00:05:32.687 EAL: Detected CPU lcores: 112 00:05:32.687 EAL: Detected NUMA nodes: 2 00:05:32.687 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:32.687 EAL: Detected shared linkage of DPDK 00:05:32.687 EAL: No shared files mode enabled, IPC will be disabled 00:05:32.687 EAL: No shared files mode enabled, IPC is disabled 00:05:32.687 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 
00:05:32.687 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:05:32.687 EAL: PCI 
driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:01.0 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:01.1 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:01.2 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:01.3 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:01.4 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:01.5 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:01.6 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:01.7 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:02.0 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:02.1 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:02.2 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:02.3 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:02.4 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:02.5 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:02.6 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b1:02.7 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:01.0 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:01.1 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:01.2 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:01.3 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:01.4 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 
0000:b3:01.5 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:01.6 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:01.7 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:02.0 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:02.1 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:02.2 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:02.3 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:02.4 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:02.5 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:02.6 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b3:02.7 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:01.0 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:01.1 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:01.2 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:01.3 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:01.4 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:01.5 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:01.6 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:01.7 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:02.0 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:02.1 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:02.2 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:02.3 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:02.4 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:02.5 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:02.6 wants IOVA as 'PA' 00:05:32.687 EAL: PCI driver qat for device 0000:b5:02.7 wants IOVA 
as 'PA' 00:05:32.687 EAL: Bus pci wants IOVA as 'PA' 00:05:32.687 EAL: Bus auxiliary wants IOVA as 'DC' 00:05:32.687 EAL: Bus vdev wants IOVA as 'DC' 00:05:32.687 EAL: Selected IOVA mode 'PA' 00:05:32.687 EAL: Probing VFIO support... 00:05:32.687 EAL: IOMMU type 1 (Type 1) is supported 00:05:32.687 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:32.687 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:32.687 EAL: VFIO support initialized 00:05:32.687 EAL: Ask a virtual area of 0x2e000 bytes 00:05:32.687 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:32.687 EAL: Setting up physically contiguous memory... 00:05:32.687 EAL: Setting maximum number of open files to 524288 00:05:32.687 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:32.687 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:32.687 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:32.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.687 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:32.687 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.687 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.687 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:32.687 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:32.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.687 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:32.687 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.687 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.687 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:32.687 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:32.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.687 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:32.687 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.687 EAL: Ask a virtual area of 0x400000000 
bytes 00:05:32.687 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:32.687 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:32.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.687 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:32.687 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.687 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.687 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:32.687 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:32.687 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:32.687 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.687 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:32.687 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.687 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.687 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:32.688 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:32.688 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.688 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:32.688 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.688 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.688 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:32.688 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:32.688 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.688 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:32.688 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.688 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.688 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:32.688 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:32.688 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.688 EAL: Virtual area found at 
0x201c00e00000 (size = 0x61000) 00:05:32.688 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:32.688 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.688 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:32.688 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:32.688 EAL: Hugepages will be freed exactly as allocated. 00:05:32.688 EAL: No shared files mode enabled, IPC is disabled 00:05:32.688 EAL: No shared files mode enabled, IPC is disabled 00:05:32.688 EAL: TSC frequency is ~2500000 KHz 00:05:32.688 EAL: Main lcore 0 is ready (tid=7faec68e6b00;cpuset=[0]) 00:05:32.688 EAL: Trying to obtain current memory policy. 00:05:32.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.688 EAL: Restoring previous memory policy: 0 00:05:32.688 EAL: request: mp_malloc_sync 00:05:32.688 EAL: No shared files mode enabled, IPC is disabled 00:05:32.688 EAL: Heap on socket 0 was expanded by 2MB 00:05:32.688 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001000000 00:05:32.688 EAL: PCI memory mapped at 0x202001001000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001002000 00:05:32.688 EAL: PCI memory mapped at 0x202001003000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001004000 00:05:32.688 EAL: PCI memory mapped at 0x202001005000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 
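The memseg reservations above follow directly from the parameters EAL prints: each of the 4 lists per socket holds `n_segs:8192` segments of `hugepage_sz:2097152` (2 MiB), which needs exactly `0x400000000` bytes (16 GiB) of reserved virtual address space per list, matching every `size = 0x400000000` line in the log. A quick check of that arithmetic:

```python
# Parameters taken verbatim from the EAL log lines above.
n_segs = 8192
hugepage_sz = 2 * 1024 * 1024  # 2 MiB

va_per_list = n_segs * hugepage_sz
assert va_per_list == 0x400000000  # 16 GiB, as reserved per memseg list

# 4 lists per socket, 2 sockets detected in this run.
total_va = va_per_list * 4 * 2
assert total_va == 128 * 1024 ** 3  # 128 GiB of reserved VA overall
```
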
0x202001006000 00:05:32.688 EAL: PCI memory mapped at 0x202001007000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001008000 00:05:32.688 EAL: PCI memory mapped at 0x202001009000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x20200100a000 00:05:32.688 EAL: PCI memory mapped at 0x20200100b000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x20200100c000 00:05:32.688 EAL: PCI memory mapped at 0x20200100d000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x20200100e000 00:05:32.688 EAL: PCI memory mapped at 0x20200100f000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001010000 00:05:32.688 EAL: PCI memory mapped at 0x202001011000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001012000 00:05:32.688 EAL: PCI memory mapped at 0x202001013000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 
00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001014000 00:05:32.688 EAL: PCI memory mapped at 0x202001015000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001016000 00:05:32.688 EAL: PCI memory mapped at 0x202001017000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001018000 00:05:32.688 EAL: PCI memory mapped at 0x202001019000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x20200101a000 00:05:32.688 EAL: PCI memory mapped at 0x20200101b000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x20200101c000 00:05:32.688 EAL: PCI memory mapped at 0x20200101d000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:32.688 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x20200101e000 00:05:32.688 EAL: PCI memory mapped at 0x20200101f000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001020000 00:05:32.688 EAL: PCI memory mapped at 0x202001021000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:3f:01.0 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001022000 00:05:32.688 EAL: PCI memory mapped at 0x202001023000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001024000 00:05:32.688 EAL: PCI memory mapped at 0x202001025000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001026000 00:05:32.688 EAL: PCI memory mapped at 0x202001027000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001028000 00:05:32.688 EAL: PCI memory mapped at 0x202001029000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x20200102a000 00:05:32.688 EAL: PCI memory mapped at 0x20200102b000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x20200102c000 00:05:32.688 EAL: PCI memory mapped at 0x20200102d000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x20200102e000 00:05:32.688 EAL: PCI memory 
mapped at 0x20200102f000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001030000 00:05:32.688 EAL: PCI memory mapped at 0x202001031000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001032000 00:05:32.688 EAL: PCI memory mapped at 0x202001033000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001034000 00:05:32.688 EAL: PCI memory mapped at 0x202001035000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001036000 00:05:32.688 EAL: PCI memory mapped at 0x202001037000 00:05:32.688 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:32.688 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:05:32.688 EAL: probe driver: 8086:37c9 qat 00:05:32.688 EAL: PCI memory mapped at 0x202001038000 00:05:32.688 EAL: PCI memory mapped at 0x202001039000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:32.689 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200103a000 00:05:32.689 EAL: PCI memory mapped at 0x20200103b000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:32.689 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:05:32.689 EAL: probe driver: 8086:37c9 qat 
00:05:32.689 EAL: PCI memory mapped at 0x20200103c000 00:05:32.689 EAL: PCI memory mapped at 0x20200103d000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:32.689 EAL: PCI device 0000:3f:02.7 on NUMA socket 0 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200103e000 00:05:32.689 EAL: PCI memory mapped at 0x20200103f000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:32.689 EAL: PCI device 0000:b1:01.0 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001040000 00:05:32.689 EAL: PCI memory mapped at 0x202001041000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.0 (socket 1) 00:05:32.689 EAL: Trying to obtain current memory policy. 00:05:32.689 EAL: Setting policy MPOL_PREFERRED for socket 1 00:05:32.689 EAL: Restoring previous memory policy: 4 00:05:32.689 EAL: request: mp_malloc_sync 00:05:32.689 EAL: No shared files mode enabled, IPC is disabled 00:05:32.689 EAL: Heap on socket 1 was expanded by 2MB 00:05:32.689 EAL: PCI device 0000:b1:01.1 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001042000 00:05:32.689 EAL: PCI memory mapped at 0x202001043000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.1 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:01.2 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001044000 00:05:32.689 EAL: PCI memory mapped at 0x202001045000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.2 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:01.3 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001046000 00:05:32.689 EAL: PCI memory mapped at 0x202001047000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.3 
(socket 1) 00:05:32.689 EAL: PCI device 0000:b1:01.4 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001048000 00:05:32.689 EAL: PCI memory mapped at 0x202001049000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.4 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:01.5 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200104a000 00:05:32.689 EAL: PCI memory mapped at 0x20200104b000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.5 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:01.6 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200104c000 00:05:32.689 EAL: PCI memory mapped at 0x20200104d000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.6 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:01.7 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200104e000 00:05:32.689 EAL: PCI memory mapped at 0x20200104f000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.7 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:02.0 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001050000 00:05:32.689 EAL: PCI memory mapped at 0x202001051000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.0 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:02.1 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001052000 00:05:32.689 EAL: PCI memory mapped at 0x202001053000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.1 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:02.2 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001054000 00:05:32.689 EAL: PCI memory mapped at 
0x202001055000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.2 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:02.3 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001056000 00:05:32.689 EAL: PCI memory mapped at 0x202001057000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.3 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:02.4 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001058000 00:05:32.689 EAL: PCI memory mapped at 0x202001059000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.4 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:02.5 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200105a000 00:05:32.689 EAL: PCI memory mapped at 0x20200105b000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.5 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:02.6 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200105c000 00:05:32.689 EAL: PCI memory mapped at 0x20200105d000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.6 (socket 1) 00:05:32.689 EAL: PCI device 0000:b1:02.7 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200105e000 00:05:32.689 EAL: PCI memory mapped at 0x20200105f000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.7 (socket 1) 00:05:32.689 EAL: PCI device 0000:b3:01.0 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001060000 00:05:32.689 EAL: PCI memory mapped at 0x202001061000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.0 (socket 1) 00:05:32.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.689 EAL: PCI memory unmapped at 
0x202001060000 00:05:32.689 EAL: PCI memory unmapped at 0x202001061000 00:05:32.689 EAL: Requested device 0000:b3:01.0 cannot be used 00:05:32.689 EAL: PCI device 0000:b3:01.1 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001062000 00:05:32.689 EAL: PCI memory mapped at 0x202001063000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.1 (socket 1) 00:05:32.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.689 EAL: PCI memory unmapped at 0x202001062000 00:05:32.689 EAL: PCI memory unmapped at 0x202001063000 00:05:32.689 EAL: Requested device 0000:b3:01.1 cannot be used 00:05:32.689 EAL: PCI device 0000:b3:01.2 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001064000 00:05:32.689 EAL: PCI memory mapped at 0x202001065000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.2 (socket 1) 00:05:32.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.689 EAL: PCI memory unmapped at 0x202001064000 00:05:32.689 EAL: PCI memory unmapped at 0x202001065000 00:05:32.689 EAL: Requested device 0000:b3:01.2 cannot be used 00:05:32.689 EAL: PCI device 0000:b3:01.3 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001066000 00:05:32.689 EAL: PCI memory mapped at 0x202001067000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.3 (socket 1) 00:05:32.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.689 EAL: PCI memory unmapped at 0x202001066000 00:05:32.689 EAL: PCI memory unmapped at 0x202001067000 00:05:32.689 EAL: Requested device 0000:b3:01.3 cannot be used 00:05:32.689 EAL: PCI device 0000:b3:01.4 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001068000 00:05:32.689 EAL: PCI memory mapped at 0x202001069000 
00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.4 (socket 1) 00:05:32.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.689 EAL: PCI memory unmapped at 0x202001068000 00:05:32.689 EAL: PCI memory unmapped at 0x202001069000 00:05:32.689 EAL: Requested device 0000:b3:01.4 cannot be used 00:05:32.689 EAL: PCI device 0000:b3:01.5 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200106a000 00:05:32.689 EAL: PCI memory mapped at 0x20200106b000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.5 (socket 1) 00:05:32.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.689 EAL: PCI memory unmapped at 0x20200106a000 00:05:32.689 EAL: PCI memory unmapped at 0x20200106b000 00:05:32.689 EAL: Requested device 0000:b3:01.5 cannot be used 00:05:32.689 EAL: PCI device 0000:b3:01.6 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200106c000 00:05:32.689 EAL: PCI memory mapped at 0x20200106d000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.6 (socket 1) 00:05:32.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.689 EAL: PCI memory unmapped at 0x20200106c000 00:05:32.689 EAL: PCI memory unmapped at 0x20200106d000 00:05:32.689 EAL: Requested device 0000:b3:01.6 cannot be used 00:05:32.689 EAL: PCI device 0000:b3:01.7 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x20200106e000 00:05:32.689 EAL: PCI memory mapped at 0x20200106f000 00:05:32.689 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.7 (socket 1) 00:05:32.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.689 EAL: PCI memory unmapped at 0x20200106e000 00:05:32.689 EAL: PCI memory unmapped at 0x20200106f000 00:05:32.689 EAL: Requested device 0000:b3:01.7 cannot be used 00:05:32.689 
EAL: PCI device 0000:b3:02.0 on NUMA socket 1 00:05:32.689 EAL: probe driver: 8086:37c9 qat 00:05:32.689 EAL: PCI memory mapped at 0x202001070000 00:05:32.689 EAL: PCI memory mapped at 0x202001071000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.0 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001070000 00:05:32.690 EAL: PCI memory unmapped at 0x202001071000 00:05:32.690 EAL: Requested device 0000:b3:02.0 cannot be used 00:05:32.690 EAL: PCI device 0000:b3:02.1 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001072000 00:05:32.690 EAL: PCI memory mapped at 0x202001073000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.1 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001072000 00:05:32.690 EAL: PCI memory unmapped at 0x202001073000 00:05:32.690 EAL: Requested device 0000:b3:02.1 cannot be used 00:05:32.690 EAL: PCI device 0000:b3:02.2 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001074000 00:05:32.690 EAL: PCI memory mapped at 0x202001075000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.2 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001074000 00:05:32.690 EAL: PCI memory unmapped at 0x202001075000 00:05:32.690 EAL: Requested device 0000:b3:02.2 cannot be used 00:05:32.690 EAL: PCI device 0000:b3:02.3 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001076000 00:05:32.690 EAL: PCI memory mapped at 0x202001077000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.3 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001076000 00:05:32.690 EAL: PCI memory unmapped at 0x202001077000 00:05:32.690 EAL: Requested device 0000:b3:02.3 cannot be used 00:05:32.690 EAL: PCI device 0000:b3:02.4 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001078000 00:05:32.690 EAL: PCI memory mapped at 0x202001079000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.4 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001078000 00:05:32.690 EAL: PCI memory unmapped at 0x202001079000 00:05:32.690 EAL: Requested device 0000:b3:02.4 cannot be used 00:05:32.690 EAL: PCI device 0000:b3:02.5 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x20200107a000 00:05:32.690 EAL: PCI memory mapped at 0x20200107b000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.5 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x20200107a000 00:05:32.690 EAL: PCI memory unmapped at 0x20200107b000 00:05:32.690 EAL: Requested device 0000:b3:02.5 cannot be used 00:05:32.690 EAL: PCI device 0000:b3:02.6 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x20200107c000 00:05:32.690 EAL: PCI memory mapped at 0x20200107d000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.6 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x20200107c000 00:05:32.690 EAL: PCI memory unmapped at 0x20200107d000 00:05:32.690 EAL: Requested device 0000:b3:02.6 cannot be used 00:05:32.690 EAL: PCI device 0000:b3:02.7 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x20200107e000 
00:05:32.690 EAL: PCI memory mapped at 0x20200107f000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.7 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x20200107e000 00:05:32.690 EAL: PCI memory unmapped at 0x20200107f000 00:05:32.690 EAL: Requested device 0000:b3:02.7 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:01.0 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001080000 00:05:32.690 EAL: PCI memory mapped at 0x202001081000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.0 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001080000 00:05:32.690 EAL: PCI memory unmapped at 0x202001081000 00:05:32.690 EAL: Requested device 0000:b5:01.0 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:01.1 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001082000 00:05:32.690 EAL: PCI memory mapped at 0x202001083000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.1 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001082000 00:05:32.690 EAL: PCI memory unmapped at 0x202001083000 00:05:32.690 EAL: Requested device 0000:b5:01.1 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:01.2 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001084000 00:05:32.690 EAL: PCI memory mapped at 0x202001085000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.2 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001084000 00:05:32.690 EAL: PCI memory unmapped at 0x202001085000 00:05:32.690 EAL: 
Requested device 0000:b5:01.2 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:01.3 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001086000 00:05:32.690 EAL: PCI memory mapped at 0x202001087000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.3 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001086000 00:05:32.690 EAL: PCI memory unmapped at 0x202001087000 00:05:32.690 EAL: Requested device 0000:b5:01.3 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:01.4 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001088000 00:05:32.690 EAL: PCI memory mapped at 0x202001089000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.4 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001088000 00:05:32.690 EAL: PCI memory unmapped at 0x202001089000 00:05:32.690 EAL: Requested device 0000:b5:01.4 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:01.5 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x20200108a000 00:05:32.690 EAL: PCI memory mapped at 0x20200108b000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.5 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x20200108a000 00:05:32.690 EAL: PCI memory unmapped at 0x20200108b000 00:05:32.690 EAL: Requested device 0000:b5:01.5 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:01.6 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x20200108c000 00:05:32.690 EAL: PCI memory mapped at 0x20200108d000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.6 (socket 1) 
00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x20200108c000 00:05:32.690 EAL: PCI memory unmapped at 0x20200108d000 00:05:32.690 EAL: Requested device 0000:b5:01.6 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:01.7 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x20200108e000 00:05:32.690 EAL: PCI memory mapped at 0x20200108f000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.7 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x20200108e000 00:05:32.690 EAL: PCI memory unmapped at 0x20200108f000 00:05:32.690 EAL: Requested device 0000:b5:01.7 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:02.0 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001090000 00:05:32.690 EAL: PCI memory mapped at 0x202001091000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.0 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001090000 00:05:32.690 EAL: PCI memory unmapped at 0x202001091000 00:05:32.690 EAL: Requested device 0000:b5:02.0 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:02.1 on NUMA socket 1 00:05:32.690 EAL: probe driver: 8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001092000 00:05:32.690 EAL: PCI memory mapped at 0x202001093000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.1 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001092000 00:05:32.690 EAL: PCI memory unmapped at 0x202001093000 00:05:32.690 EAL: Requested device 0000:b5:02.1 cannot be used 00:05:32.690 EAL: PCI device 0000:b5:02.2 on NUMA socket 1 00:05:32.690 EAL: probe driver: 
8086:37c9 qat 00:05:32.690 EAL: PCI memory mapped at 0x202001094000 00:05:32.690 EAL: PCI memory mapped at 0x202001095000 00:05:32.690 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.2 (socket 1) 00:05:32.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.690 EAL: PCI memory unmapped at 0x202001094000 00:05:32.690 EAL: PCI memory unmapped at 0x202001095000 00:05:32.691 EAL: Requested device 0000:b5:02.2 cannot be used 00:05:32.691 EAL: PCI device 0000:b5:02.3 on NUMA socket 1 00:05:32.691 EAL: probe driver: 8086:37c9 qat 00:05:32.691 EAL: PCI memory mapped at 0x202001096000 00:05:32.691 EAL: PCI memory mapped at 0x202001097000 00:05:32.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.3 (socket 1) 00:05:32.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.691 EAL: PCI memory unmapped at 0x202001096000 00:05:32.691 EAL: PCI memory unmapped at 0x202001097000 00:05:32.691 EAL: Requested device 0000:b5:02.3 cannot be used 00:05:32.691 EAL: PCI device 0000:b5:02.4 on NUMA socket 1 00:05:32.691 EAL: probe driver: 8086:37c9 qat 00:05:32.691 EAL: PCI memory mapped at 0x202001098000 00:05:32.691 EAL: PCI memory mapped at 0x202001099000 00:05:32.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.4 (socket 1) 00:05:32.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.691 EAL: PCI memory unmapped at 0x202001098000 00:05:32.691 EAL: PCI memory unmapped at 0x202001099000 00:05:32.691 EAL: Requested device 0000:b5:02.4 cannot be used 00:05:32.691 EAL: PCI device 0000:b5:02.5 on NUMA socket 1 00:05:32.691 EAL: probe driver: 8086:37c9 qat 00:05:32.691 EAL: PCI memory mapped at 0x20200109a000 00:05:32.691 EAL: PCI memory mapped at 0x20200109b000 00:05:32.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.5 (socket 1) 00:05:32.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.691 EAL: PCI memory unmapped at 0x20200109a000 
00:05:32.691 EAL: PCI memory unmapped at 0x20200109b000 00:05:32.691 EAL: Requested device 0000:b5:02.5 cannot be used 00:05:32.691 EAL: PCI device 0000:b5:02.6 on NUMA socket 1 00:05:32.691 EAL: probe driver: 8086:37c9 qat 00:05:32.691 EAL: PCI memory mapped at 0x20200109c000 00:05:32.691 EAL: PCI memory mapped at 0x20200109d000 00:05:32.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.6 (socket 1) 00:05:32.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.691 EAL: PCI memory unmapped at 0x20200109c000 00:05:32.691 EAL: PCI memory unmapped at 0x20200109d000 00:05:32.691 EAL: Requested device 0000:b5:02.6 cannot be used 00:05:32.691 EAL: PCI device 0000:b5:02.7 on NUMA socket 1 00:05:32.691 EAL: probe driver: 8086:37c9 qat 00:05:32.691 EAL: PCI memory mapped at 0x20200109e000 00:05:32.691 EAL: PCI memory mapped at 0x20200109f000 00:05:32.691 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.7 (socket 1) 00:05:32.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:32.691 EAL: PCI memory unmapped at 0x20200109e000 00:05:32.691 EAL: PCI memory unmapped at 0x20200109f000 00:05:32.691 EAL: Requested device 0000:b5:02.7 cannot be used 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:32.691 EAL: Mem event callback 'spdk:(nil)' registered 00:05:32.691 00:05:32.691 00:05:32.691 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.691 http://cunit.sourceforge.net/ 00:05:32.691 00:05:32.691 00:05:32.691 Suite: components_suite 00:05:32.691 Test: vtophys_malloc_test ...passed 00:05:32.691 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:32.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.691 EAL: Restoring previous memory policy: 4 00:05:32.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.691 EAL: request: mp_malloc_sync 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: Heap on socket 0 was expanded by 4MB 00:05:32.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.691 EAL: request: mp_malloc_sync 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: Heap on socket 0 was shrunk by 4MB 00:05:32.691 EAL: Trying to obtain current memory policy. 00:05:32.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.691 EAL: Restoring previous memory policy: 4 00:05:32.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.691 EAL: request: mp_malloc_sync 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: Heap on socket 0 was expanded by 6MB 00:05:32.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.691 EAL: request: mp_malloc_sync 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: Heap on socket 0 was shrunk by 6MB 00:05:32.691 EAL: Trying to obtain current memory policy. 00:05:32.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.691 EAL: Restoring previous memory policy: 4 00:05:32.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.691 EAL: request: mp_malloc_sync 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: Heap on socket 0 was expanded by 10MB 00:05:32.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.691 EAL: request: mp_malloc_sync 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: Heap on socket 0 was shrunk by 10MB 00:05:32.691 EAL: Trying to obtain current memory policy. 
00:05:32.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.691 EAL: Restoring previous memory policy: 4 00:05:32.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.691 EAL: request: mp_malloc_sync 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: Heap on socket 0 was expanded by 18MB 00:05:32.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.691 EAL: request: mp_malloc_sync 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: Heap on socket 0 was shrunk by 18MB 00:05:32.691 EAL: Trying to obtain current memory policy. 00:05:32.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.691 EAL: Restoring previous memory policy: 4 00:05:32.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.691 EAL: request: mp_malloc_sync 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: Heap on socket 0 was expanded by 34MB 00:05:32.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.691 EAL: request: mp_malloc_sync 00:05:32.691 EAL: No shared files mode enabled, IPC is disabled 00:05:32.691 EAL: Heap on socket 0 was shrunk by 34MB 00:05:32.691 EAL: Trying to obtain current memory policy. 00:05:32.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.951 EAL: Restoring previous memory policy: 4 00:05:32.951 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.951 EAL: request: mp_malloc_sync 00:05:32.951 EAL: No shared files mode enabled, IPC is disabled 00:05:32.951 EAL: Heap on socket 0 was expanded by 66MB 00:05:32.951 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.951 EAL: request: mp_malloc_sync 00:05:32.951 EAL: No shared files mode enabled, IPC is disabled 00:05:33.210 EAL: Heap on socket 0 was shrunk by 66MB 00:05:33.210 EAL: Trying to obtain current memory policy. 
00:05:33.210 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.471 EAL: Restoring previous memory policy: 4 00:05:33.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.471 EAL: request: mp_malloc_sync 00:05:33.471 EAL: No shared files mode enabled, IPC is disabled 00:05:33.471 EAL: Heap on socket 0 was expanded by 130MB 00:05:33.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.471 EAL: request: mp_malloc_sync 00:05:33.471 EAL: No shared files mode enabled, IPC is disabled 00:05:33.471 EAL: Heap on socket 0 was shrunk by 130MB 00:05:33.471 EAL: Trying to obtain current memory policy. 00:05:33.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.471 EAL: Restoring previous memory policy: 4 00:05:33.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.471 EAL: request: mp_malloc_sync 00:05:33.471 EAL: No shared files mode enabled, IPC is disabled 00:05:33.471 EAL: Heap on socket 0 was expanded by 258MB 00:05:33.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.471 EAL: request: mp_malloc_sync 00:05:33.471 EAL: No shared files mode enabled, IPC is disabled 00:05:33.471 EAL: Heap on socket 0 was shrunk by 258MB 00:05:33.471 EAL: Trying to obtain current memory policy. 00:05:33.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.792 EAL: Restoring previous memory policy: 4 00:05:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.792 EAL: request: mp_malloc_sync 00:05:33.792 EAL: No shared files mode enabled, IPC is disabled 00:05:33.792 EAL: Heap on socket 0 was expanded by 514MB 00:05:33.792 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.792 EAL: request: mp_malloc_sync 00:05:33.792 EAL: No shared files mode enabled, IPC is disabled 00:05:33.792 EAL: Heap on socket 0 was shrunk by 514MB 00:05:33.792 EAL: Trying to obtain current memory policy. 
00:05:33.792 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.051 EAL: Restoring previous memory policy: 4 00:05:34.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.051 EAL: request: mp_malloc_sync 00:05:34.051 EAL: No shared files mode enabled, IPC is disabled 00:05:34.051 EAL: Heap on socket 0 was expanded by 1026MB 00:05:34.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.310 EAL: request: mp_malloc_sync 00:05:34.310 EAL: No shared files mode enabled, IPC is disabled 00:05:34.310 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:34.310 passed 00:05:34.310 00:05:34.310 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.310 suites 1 1 n/a 0 0 00:05:34.310 tests 2 2 2 0 0 00:05:34.310 asserts 6611 6611 6611 0 n/a 00:05:34.310 00:05:34.310 Elapsed time = 0.971 seconds 00:05:34.310 EAL: No shared files mode enabled, IPC is disabled 00:05:34.310 EAL: No shared files mode enabled, IPC is disabled 00:05:34.310 EAL: No shared files mode enabled, IPC is disabled 00:05:34.310 00:05:34.310 real 0m1.617s 00:05:34.310 user 0m0.643s 00:05:34.310 sys 0m0.459s 00:05:34.310 18:08:42 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.310 18:08:42 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:34.310 ************************************ 00:05:34.310 END TEST env_vtophys 00:05:34.310 ************************************ 00:05:34.310 18:08:42 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:34.310 18:08:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.310 18:08:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.310 18:08:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.310 ************************************ 00:05:34.310 START TEST env_pci 00:05:34.310 ************************************ 00:05:34.310 18:08:42 env.env_pci -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:34.310 00:05:34.310 00:05:34.310 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.310 http://cunit.sourceforge.net/ 00:05:34.310 00:05:34.310 00:05:34.310 Suite: pci 00:05:34.311 Test: pci_hook ...[2024-07-24 18:08:42.853478] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2101321 has claimed it 00:05:34.311 EAL: Cannot find device (10000:00:01.0) 00:05:34.311 EAL: Failed to attach device on primary process 00:05:34.311 passed 00:05:34.311 00:05:34.311 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.311 suites 1 1 n/a 0 0 00:05:34.311 tests 1 1 1 0 0 00:05:34.311 asserts 25 25 25 0 n/a 00:05:34.311 00:05:34.311 Elapsed time = 0.039 seconds 00:05:34.311 00:05:34.311 real 0m0.068s 00:05:34.311 user 0m0.020s 00:05:34.311 sys 0m0.047s 00:05:34.311 18:08:42 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.311 18:08:42 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:34.311 ************************************ 00:05:34.311 END TEST env_pci 00:05:34.311 ************************************ 00:05:34.572 18:08:42 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:34.572 18:08:42 env -- env/env.sh@15 -- # uname 00:05:34.572 18:08:42 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:34.572 18:08:42 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:34.572 18:08:42 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.572 18:08:42 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:34.572 18:08:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.572 18:08:42 env -- common/autotest_common.sh@10 -- # set +x 
00:05:34.572 ************************************ 00:05:34.572 START TEST env_dpdk_post_init 00:05:34.572 ************************************ 00:05:34.572 18:08:42 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.572 EAL: Detected CPU lcores: 112 00:05:34.572 EAL: Detected NUMA nodes: 2 00:05:34.572 EAL: Detected shared linkage of DPDK 00:05:34.572 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.572 EAL: Selected IOVA mode 'PA' 00:05:34.572 EAL: VFIO support initialized 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: 
Creating cryptodev 0000:3d:01.7_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 
00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_sym 
00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_asym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.572 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_sym 00:05:34.572 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.572 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 
0000:3f:01.5_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_sym 00:05:34.573 CRYPTODEV: Initialisation 
parameters - name: 0000:3f:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_asym 00:05:34.573 CRYPTODEV: 
Initialisation parameters - name: 0000:3f:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.0 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.0_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.0_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.1 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.1_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.1_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.2 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.2_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.2_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.2_qat_sym,socket id: 1, 
max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.3 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.3_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.3_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.4 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.4_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.4_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.5 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.5_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.5_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.6 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.6_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.6_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.7 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.7_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 
0000:b1:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:01.7_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.0 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.0_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.0_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.1 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.1_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.1_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.2 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.2_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.2_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.3 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.3_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.3_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 
EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.4 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.4_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.4_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.5 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.5_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.5_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.6 (socket 1) 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.6_qat_asym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.573 CRYPTODEV: Creating cryptodev 0000:b1:02.6_qat_sym 00:05:34.573 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.573 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.7 (socket 1) 00:05:34.574 CRYPTODEV: Creating cryptodev 0000:b1:02.7_qat_asym 00:05:34.574 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:34.574 CRYPTODEV: Creating cryptodev 0000:b1:02.7_qat_sym 00:05:34.574 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.0 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:01.0 cannot be used 00:05:34.574 EAL: Probe PCI driver: 
qat (8086:37c9) device: 0000:b3:01.1 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:01.1 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.2 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:01.2 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.3 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:01.3 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.4 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:01.4 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.5 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:01.5 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.6 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:01.6 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.7 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:01.7 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.0 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:02.0 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.1 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:02.1 cannot be used 
00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.2 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:02.2 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.3 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:02.3 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.4 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:02.4 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.5 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:02.5 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.6 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:02.6 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.7 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b3:02.7 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.0 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:01.0 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.1 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:01.1 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.2 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 
0000:b5:01.2 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.3 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:01.3 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.4 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:01.4 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.5 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:01.5 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.6 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:01.6 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.7 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:01.7 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.0 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:02.0 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.1 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:02.1 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.2 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:02.2 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.3 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:05:34.574 EAL: Requested device 0000:b5:02.3 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.4 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:02.4 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.5 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:02.5 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.6 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:02.6 cannot be used 00:05:34.574 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.7 (socket 1) 00:05:34.574 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.574 EAL: Requested device 0000:b5:02.7 cannot be used 00:05:34.574 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.574 EAL: Using IOMMU type 1 (Type 1) 00:05:34.574 EAL: Ignore mapping IO port bar(1) 00:05:34.574 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: 
spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:34.835 EAL: Ignore mapping IO port bar(1) 00:05:34.835 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.0 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:01.0 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.1 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:01.1 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.2 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:01.2 cannot be used 00:05:34.835 EAL: Probe PCI 
driver: qat (8086:37c9) device: 0000:b3:01.3 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:01.3 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.4 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:01.4 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.5 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:01.5 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.6 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:01.6 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.7 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:01.7 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.0 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:02.0 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.1 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:02.1 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.2 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:02.2 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.3 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:02.3 cannot be used 
00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.4 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:02.4 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.5 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:02.5 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.6 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:02.6 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.7 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b3:02.7 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.0 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b5:01.0 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.1 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.835 EAL: Requested device 0000:b5:01.1 cannot be used 00:05:34.835 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.2 (socket 1) 00:05:34.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 0000:b5:01.2 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.3 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 0000:b5:01.3 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.4 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 
0000:b5:01.4 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.5 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 0000:b5:01.5 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.6 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 0000:b5:01.6 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:01.7 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 0000:b5:01.7 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.0 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 0000:b5:02.0 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.1 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 0000:b5:02.1 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.2 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 0000:b5:02.2 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.3 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 0000:b5:02.3 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.4 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:34.836 EAL: Requested device 0000:b5:02.4 cannot be used 00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.5 (socket 1) 00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:05:34.836 EAL: Requested device 0000:b5:02.5 cannot be used
00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.6 (socket 1)
00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:34.836 EAL: Requested device 0000:b5:02.6 cannot be used
00:05:34.836 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b5:02.7 (socket 1)
00:05:34.836 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:05:34.836 EAL: Requested device 0000:b5:02.7 cannot be used
00:05:35.773 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:05:39.969 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:05:39.969 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000
00:05:39.969 Starting DPDK initialization...
00:05:39.969 Starting SPDK post initialization...
00:05:39.969 SPDK NVMe probe
00:05:39.969 Attaching to 0000:d8:00.0
00:05:39.969 Attached to 0000:d8:00.0
00:05:39.969 Cleaning up...
00:05:39.969
00:05:39.969 real 0m5.390s
00:05:39.969 user 0m4.018s
00:05:39.969 sys 0m0.428s
00:05:39.969 18:08:48 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:39.969 18:08:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:39.969 ************************************
00:05:39.969 END TEST env_dpdk_post_init
00:05:39.969 ************************************
00:05:39.969 18:08:48 env -- env/env.sh@26 -- # uname
00:05:39.969 18:08:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:39.969 18:08:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:39.969 18:08:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:39.969 18:08:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:39.969 18:08:48 env -- common/autotest_common.sh@10 -- # set +x
00:05:39.969 ************************************
00:05:39.969 START TEST
env_mem_callbacks 00:05:39.969 ************************************ 00:05:39.969 18:08:48 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.969 EAL: Detected CPU lcores: 112 00:05:39.969 EAL: Detected NUMA nodes: 2 00:05:39.969 EAL: Detected shared linkage of DPDK 00:05:39.969 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.969 EAL: Selected IOVA mode 'PA' 00:05:39.969 EAL: VFIO support initialized 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_asym 00:05:39.969 
CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 
0000:3d:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_asym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.969 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_asym 00:05:39.969 CRYPTODEV: Initialisation 
parameters - name: 0000:3d:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.969 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_sym 00:05:39.969 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_sym,socket id: 0, max queue pairs: 
0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_asym,socket id: 0, 
max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:3f:02.2 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating 
cryptodev 0000:3f:02.6_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.0 (socket 1) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.0_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.0_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.1 (socket 1) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.1_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.1_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.2 (socket 1) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.2_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.2_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.3 (socket 1) 00:05:39.970 
CRYPTODEV: Creating cryptodev 0000:b1:01.3_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.3_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.4 (socket 1) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.4_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.4_qat_sym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.970 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.5 (socket 1) 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.5_qat_asym 00:05:39.970 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.970 CRYPTODEV: Creating cryptodev 0000:b1:01.5_qat_sym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.6 (socket 1) 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:01.6_qat_asym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:01.6_qat_sym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:01.7 (socket 1) 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:01.7_qat_asym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:01.7_qat_sym 00:05:39.971 
CRYPTODEV: Initialisation parameters - name: 0000:b1:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.0 (socket 1) 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.0_qat_asym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.0_qat_sym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.1 (socket 1) 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.1_qat_asym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.1_qat_sym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.2 (socket 1) 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.2_qat_asym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.2_qat_sym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.3 (socket 1) 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.3_qat_asym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.3_qat_sym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.4 (socket 1) 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.4_qat_asym 
00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.4_qat_sym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.5 (socket 1) 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.5_qat_asym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.5_qat_sym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.6 (socket 1) 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.6_qat_asym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.6_qat_sym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b1:02.7 (socket 1) 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.7_qat_asym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:39.971 CRYPTODEV: Creating cryptodev 0000:b1:02.7_qat_sym 00:05:39.971 CRYPTODEV: Initialisation parameters - name: 0000:b1:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.0 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.971 EAL: Requested device 0000:b3:01.0 cannot be used 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.1 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:05:39.971 EAL: Requested device 0000:b3:01.1 cannot be used 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.2 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.971 EAL: Requested device 0000:b3:01.2 cannot be used 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.3 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.971 EAL: Requested device 0000:b3:01.3 cannot be used 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.4 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.971 EAL: Requested device 0000:b3:01.4 cannot be used 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.5 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.971 EAL: Requested device 0000:b3:01.5 cannot be used 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.6 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.971 EAL: Requested device 0000:b3:01.6 cannot be used 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:01.7 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.971 EAL: Requested device 0000:b3:01.7 cannot be used 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.0 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.971 EAL: Requested device 0000:b3:02.0 cannot be used 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.1 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:39.971 EAL: Requested device 0000:b3:02.1 cannot be used 00:05:39.971 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:b3:02.2 (socket 1) 00:05:39.971 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:05:39.971 EAL: Requested device 0000:b3:02.2 cannot be used 00:05:39.971 [identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pairs repeat for devices 0000:b3:02.3 through 0000:b3:02.7 and 0000:b5:01.0 through 0000:b5:02.7] 00:05:39.972 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.972 00:05:39.972 00:05:39.972 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.972 http://cunit.sourceforge.net/ 00:05:39.972 00:05:39.972 00:05:39.972 Suite: memory 00:05:39.972 Test: test ... 
00:05:39.972 register 0x200000200000 2097152 00:05:39.972 register 0x201000a00000 2097152 00:05:39.972 malloc 3145728 00:05:39.972 register 0x200000400000 4194304 00:05:39.972 buf 0x200000500000 len 3145728 PASSED 00:05:39.972 malloc 64 00:05:39.972 buf 0x2000004fff40 len 64 PASSED 00:05:39.972 malloc 4194304 00:05:39.972 register 0x200000800000 6291456 00:05:39.972 buf 0x200000a00000 len 4194304 PASSED 00:05:39.972 free 0x200000500000 3145728 00:05:39.972 free 0x2000004fff40 64 00:05:39.972 unregister 0x200000400000 4194304 PASSED 00:05:39.972 free 0x200000a00000 4194304 00:05:39.972 unregister 0x200000800000 6291456 PASSED 00:05:39.972 malloc 8388608 00:05:39.972 register 0x200000400000 10485760 00:05:39.972 buf 0x200000600000 len 8388608 PASSED 00:05:39.972 free 0x200000600000 8388608 00:05:39.972 unregister 0x200000400000 10485760 PASSED 00:05:39.972 passed 00:05:39.972 00:05:39.972 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.972 suites 1 1 n/a 0 0 00:05:39.972 tests 1 1 1 0 0 00:05:39.972 asserts 16 16 16 0 n/a 00:05:39.972 00:05:39.972 Elapsed time = 0.005 seconds 00:05:39.972 00:05:39.972 real 0m0.088s 00:05:39.972 user 0m0.027s 00:05:39.972 sys 0m0.060s 00:05:39.972 18:08:48 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.972 18:08:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:39.972 ************************************ 00:05:39.972 END TEST env_mem_callbacks 00:05:39.972 ************************************ 00:05:40.231 00:05:40.231 real 0m7.829s 00:05:40.231 user 0m5.046s 00:05:40.231 sys 0m1.366s 00:05:40.231 18:08:48 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.231 18:08:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.231 ************************************ 00:05:40.231 END TEST env 00:05:40.231 ************************************ 00:05:40.231 18:08:48 -- spdk/autotest.sh@169 -- # run_test rpc 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:40.231 18:08:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.231 18:08:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.231 18:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:40.231 ************************************ 00:05:40.231 START TEST rpc 00:05:40.231 ************************************ 00:05:40.232 18:08:48 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:40.232 * Looking for test storage... 00:05:40.232 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:40.232 18:08:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2102498 00:05:40.232 18:08:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.232 18:08:48 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:40.232 18:08:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2102498 00:05:40.232 18:08:48 rpc -- common/autotest_common.sh@831 -- # '[' -z 2102498 ']' 00:05:40.232 18:08:48 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.232 18:08:48 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.232 18:08:48 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.232 18:08:48 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.232 18:08:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.491 [2024-07-24 18:08:48.848448] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:05:40.491 [2024-07-24 18:08:48.848501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102498 ] 00:05:40.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:40.491 EAL: Requested device 0000:b3:01.0 cannot be used 00:05:40.492 [identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pairs repeat for devices 0000:b3:01.1 through 0000:b3:02.7 and 0000:b5:01.0 through 0000:b5:02.7] 00:05:40.492 [2024-07-24 18:08:48.943109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.492 [2024-07-24 18:08:49.014973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:40.492 [2024-07-24 18:08:49.015026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2102498' to capture a snapshot of events at runtime. 00:05:40.492 [2024-07-24 18:08:49.015036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:40.492 [2024-07-24 18:08:49.015044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:40.492 [2024-07-24 18:08:49.015050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2102498 for offline analysis/debug. 
00:05:40.492 [2024-07-24 18:08:49.015072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.061 18:08:49 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.061 18:08:49 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:41.061 18:08:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:41.061 18:08:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:41.061 18:08:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.061 18:08:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.061 18:08:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.061 18:08:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.061 18:08:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 ************************************ 00:05:41.323 START TEST rpc_integrity 00:05:41.323 ************************************ 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.323 { 00:05:41.323 "name": "Malloc0", 00:05:41.323 "aliases": [ 00:05:41.323 "b3c587bc-e8fa-4f20-99a8-a5879f523521" 00:05:41.323 ], 00:05:41.323 "product_name": "Malloc disk", 00:05:41.323 "block_size": 512, 00:05:41.323 "num_blocks": 16384, 00:05:41.323 "uuid": "b3c587bc-e8fa-4f20-99a8-a5879f523521", 00:05:41.323 "assigned_rate_limits": { 00:05:41.323 "rw_ios_per_sec": 0, 00:05:41.323 "rw_mbytes_per_sec": 0, 00:05:41.323 "r_mbytes_per_sec": 0, 00:05:41.323 "w_mbytes_per_sec": 0 00:05:41.323 }, 00:05:41.323 "claimed": false, 00:05:41.323 "zoned": false, 00:05:41.323 "supported_io_types": { 00:05:41.323 "read": true, 00:05:41.323 "write": true, 00:05:41.323 "unmap": true, 00:05:41.323 "flush": true, 00:05:41.323 "reset": true, 00:05:41.323 "nvme_admin": false, 00:05:41.323 "nvme_io": false, 00:05:41.323 "nvme_io_md": false, 00:05:41.323 "write_zeroes": true, 00:05:41.323 "zcopy": true, 00:05:41.323 "get_zone_info": false, 00:05:41.323 "zone_management": 
false, 00:05:41.323 "zone_append": false, 00:05:41.323 "compare": false, 00:05:41.323 "compare_and_write": false, 00:05:41.323 "abort": true, 00:05:41.323 "seek_hole": false, 00:05:41.323 "seek_data": false, 00:05:41.323 "copy": true, 00:05:41.323 "nvme_iov_md": false 00:05:41.323 }, 00:05:41.323 "memory_domains": [ 00:05:41.323 { 00:05:41.323 "dma_device_id": "system", 00:05:41.323 "dma_device_type": 1 00:05:41.323 }, 00:05:41.323 { 00:05:41.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.323 "dma_device_type": 2 00:05:41.323 } 00:05:41.323 ], 00:05:41.323 "driver_specific": {} 00:05:41.323 } 00:05:41.323 ]' 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.323 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.323 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.323 [2024-07-24 18:08:49.816050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.323 [2024-07-24 18:08:49.816079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.323 [2024-07-24 18:08:49.816093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd185e0 00:05:41.324 [2024-07-24 18:08:49.816101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.324 [2024-07-24 18:08:49.817148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.324 [2024-07-24 18:08:49.817173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.324 Passthru0 00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.324 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 
00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.324 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.324 { 00:05:41.324 "name": "Malloc0", 00:05:41.324 "aliases": [ 00:05:41.324 "b3c587bc-e8fa-4f20-99a8-a5879f523521" 00:05:41.324 ], 00:05:41.324 "product_name": "Malloc disk", 00:05:41.324 "block_size": 512, 00:05:41.324 "num_blocks": 16384, 00:05:41.324 "uuid": "b3c587bc-e8fa-4f20-99a8-a5879f523521", 00:05:41.324 "assigned_rate_limits": { 00:05:41.324 "rw_ios_per_sec": 0, 00:05:41.324 "rw_mbytes_per_sec": 0, 00:05:41.324 "r_mbytes_per_sec": 0, 00:05:41.324 "w_mbytes_per_sec": 0 00:05:41.324 }, 00:05:41.324 "claimed": true, 00:05:41.324 "claim_type": "exclusive_write", 00:05:41.324 "zoned": false, 00:05:41.324 "supported_io_types": { 00:05:41.324 "read": true, 00:05:41.324 "write": true, 00:05:41.324 "unmap": true, 00:05:41.324 "flush": true, 00:05:41.324 "reset": true, 00:05:41.324 "nvme_admin": false, 00:05:41.324 "nvme_io": false, 00:05:41.324 "nvme_io_md": false, 00:05:41.324 "write_zeroes": true, 00:05:41.324 "zcopy": true, 00:05:41.324 "get_zone_info": false, 00:05:41.324 "zone_management": false, 00:05:41.324 "zone_append": false, 00:05:41.324 "compare": false, 00:05:41.324 "compare_and_write": false, 00:05:41.324 "abort": true, 00:05:41.324 "seek_hole": false, 00:05:41.324 "seek_data": false, 00:05:41.324 "copy": true, 00:05:41.324 "nvme_iov_md": false 00:05:41.324 }, 00:05:41.324 "memory_domains": [ 00:05:41.324 { 00:05:41.324 "dma_device_id": "system", 00:05:41.324 "dma_device_type": 1 00:05:41.324 }, 00:05:41.324 { 00:05:41.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.324 "dma_device_type": 2 00:05:41.324 } 00:05:41.324 ], 00:05:41.324 "driver_specific": {} 00:05:41.324 }, 00:05:41.324 { 00:05:41.324 
"name": "Passthru0", 00:05:41.324 "aliases": [ 00:05:41.324 "1890eab9-3b72-54c6-a59f-312b489639c7" 00:05:41.324 ], 00:05:41.324 "product_name": "passthru", 00:05:41.324 "block_size": 512, 00:05:41.324 "num_blocks": 16384, 00:05:41.324 "uuid": "1890eab9-3b72-54c6-a59f-312b489639c7", 00:05:41.324 "assigned_rate_limits": { 00:05:41.324 "rw_ios_per_sec": 0, 00:05:41.324 "rw_mbytes_per_sec": 0, 00:05:41.324 "r_mbytes_per_sec": 0, 00:05:41.324 "w_mbytes_per_sec": 0 00:05:41.324 }, 00:05:41.324 "claimed": false, 00:05:41.324 "zoned": false, 00:05:41.324 "supported_io_types": { 00:05:41.324 "read": true, 00:05:41.324 "write": true, 00:05:41.324 "unmap": true, 00:05:41.324 "flush": true, 00:05:41.324 "reset": true, 00:05:41.324 "nvme_admin": false, 00:05:41.324 "nvme_io": false, 00:05:41.324 "nvme_io_md": false, 00:05:41.324 "write_zeroes": true, 00:05:41.324 "zcopy": true, 00:05:41.324 "get_zone_info": false, 00:05:41.324 "zone_management": false, 00:05:41.324 "zone_append": false, 00:05:41.324 "compare": false, 00:05:41.324 "compare_and_write": false, 00:05:41.324 "abort": true, 00:05:41.324 "seek_hole": false, 00:05:41.324 "seek_data": false, 00:05:41.324 "copy": true, 00:05:41.324 "nvme_iov_md": false 00:05:41.324 }, 00:05:41.324 "memory_domains": [ 00:05:41.324 { 00:05:41.324 "dma_device_id": "system", 00:05:41.324 "dma_device_type": 1 00:05:41.324 }, 00:05:41.324 { 00:05:41.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.324 "dma_device_type": 2 00:05:41.324 } 00:05:41.324 ], 00:05:41.324 "driver_specific": { 00:05:41.324 "passthru": { 00:05:41.324 "name": "Passthru0", 00:05:41.324 "base_bdev_name": "Malloc0" 00:05:41.324 } 00:05:41.324 } 00:05:41.324 } 00:05:41.324 ]' 00:05:41.324 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.324 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.324 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.324 18:08:49 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.324 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.324 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.324 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.584 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.584 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.584 18:08:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.584 00:05:41.584 real 0m0.288s 00:05:41.584 user 0m0.168s 00:05:41.584 sys 0m0.058s 00:05:41.584 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.584 18:08:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 ************************************ 00:05:41.584 END TEST rpc_integrity 00:05:41.584 ************************************ 00:05:41.584 18:08:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.584 18:08:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.584 18:08:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.584 18:08:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 ************************************ 00:05:41.584 START TEST rpc_plugins 00:05:41.584 
************************************ 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.584 { 00:05:41.584 "name": "Malloc1", 00:05:41.584 "aliases": [ 00:05:41.584 "2aa75036-0e41-4d64-b07c-58a10f26beb5" 00:05:41.584 ], 00:05:41.584 "product_name": "Malloc disk", 00:05:41.584 "block_size": 4096, 00:05:41.584 "num_blocks": 256, 00:05:41.584 "uuid": "2aa75036-0e41-4d64-b07c-58a10f26beb5", 00:05:41.584 "assigned_rate_limits": { 00:05:41.584 "rw_ios_per_sec": 0, 00:05:41.584 "rw_mbytes_per_sec": 0, 00:05:41.584 "r_mbytes_per_sec": 0, 00:05:41.584 "w_mbytes_per_sec": 0 00:05:41.584 }, 00:05:41.584 "claimed": false, 00:05:41.584 "zoned": false, 00:05:41.584 "supported_io_types": { 00:05:41.584 "read": true, 00:05:41.584 "write": true, 00:05:41.584 "unmap": true, 00:05:41.584 "flush": true, 00:05:41.584 "reset": true, 00:05:41.584 "nvme_admin": false, 00:05:41.584 "nvme_io": false, 00:05:41.584 "nvme_io_md": false, 00:05:41.584 "write_zeroes": true, 00:05:41.584 "zcopy": true, 00:05:41.584 "get_zone_info": false, 00:05:41.584 "zone_management": false, 00:05:41.584 "zone_append": false, 
00:05:41.584 "compare": false, 00:05:41.584 "compare_and_write": false, 00:05:41.584 "abort": true, 00:05:41.584 "seek_hole": false, 00:05:41.584 "seek_data": false, 00:05:41.584 "copy": true, 00:05:41.584 "nvme_iov_md": false 00:05:41.584 }, 00:05:41.584 "memory_domains": [ 00:05:41.584 { 00:05:41.584 "dma_device_id": "system", 00:05:41.584 "dma_device_type": 1 00:05:41.584 }, 00:05:41.584 { 00:05:41.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.584 "dma_device_type": 2 00:05:41.584 } 00:05:41.584 ], 00:05:41.584 "driver_specific": {} 00:05:41.584 } 00:05:41.584 ]' 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:41.584 18:08:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.584 00:05:41.584 real 0m0.130s 00:05:41.584 user 0m0.079s 00:05:41.584 sys 0m0.021s 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.584 18:08:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.584 ************************************ 00:05:41.584 END TEST 
rpc_plugins 00:05:41.584 ************************************ 00:05:41.843 18:08:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:41.843 18:08:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.843 18:08:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.843 18:08:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.843 ************************************ 00:05:41.843 START TEST rpc_trace_cmd_test 00:05:41.843 ************************************ 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:41.843 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2102498", 00:05:41.843 "tpoint_group_mask": "0x8", 00:05:41.843 "iscsi_conn": { 00:05:41.843 "mask": "0x2", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "scsi": { 00:05:41.843 "mask": "0x4", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "bdev": { 00:05:41.843 "mask": "0x8", 00:05:41.843 "tpoint_mask": "0xffffffffffffffff" 00:05:41.843 }, 00:05:41.843 "nvmf_rdma": { 00:05:41.843 "mask": "0x10", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "nvmf_tcp": { 00:05:41.843 "mask": "0x20", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "ftl": { 00:05:41.843 "mask": "0x40", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "blobfs": { 00:05:41.843 "mask": "0x80", 00:05:41.843 "tpoint_mask": "0x0" 
00:05:41.843 }, 00:05:41.843 "dsa": { 00:05:41.843 "mask": "0x200", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "thread": { 00:05:41.843 "mask": "0x400", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "nvme_pcie": { 00:05:41.843 "mask": "0x800", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "iaa": { 00:05:41.843 "mask": "0x1000", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "nvme_tcp": { 00:05:41.843 "mask": "0x2000", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "bdev_nvme": { 00:05:41.843 "mask": "0x4000", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 }, 00:05:41.843 "sock": { 00:05:41.843 "mask": "0x8000", 00:05:41.843 "tpoint_mask": "0x0" 00:05:41.843 } 00:05:41.843 }' 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:41.843 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.103 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.103 18:08:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.103 00:05:42.103 real 0m0.225s 00:05:42.103 user 0m0.183s 00:05:42.103 sys 0m0.035s 00:05:42.103 18:08:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.103 18:08:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.103 ************************************ 
00:05:42.103 END TEST rpc_trace_cmd_test 00:05:42.103 ************************************ 00:05:42.103 18:08:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.103 18:08:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.103 18:08:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.103 18:08:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.103 18:08:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.103 18:08:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.103 ************************************ 00:05:42.103 START TEST rpc_daemon_integrity 00:05:42.103 ************************************ 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.103 18:08:50 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.103 { 00:05:42.103 "name": "Malloc2", 00:05:42.103 "aliases": [ 00:05:42.103 "a7fa59ff-a2e5-4aa7-a109-0d6dee61e3c1" 00:05:42.103 ], 00:05:42.103 "product_name": "Malloc disk", 00:05:42.103 "block_size": 512, 00:05:42.103 "num_blocks": 16384, 00:05:42.103 "uuid": "a7fa59ff-a2e5-4aa7-a109-0d6dee61e3c1", 00:05:42.103 "assigned_rate_limits": { 00:05:42.103 "rw_ios_per_sec": 0, 00:05:42.103 "rw_mbytes_per_sec": 0, 00:05:42.103 "r_mbytes_per_sec": 0, 00:05:42.103 "w_mbytes_per_sec": 0 00:05:42.103 }, 00:05:42.103 "claimed": false, 00:05:42.103 "zoned": false, 00:05:42.103 "supported_io_types": { 00:05:42.103 "read": true, 00:05:42.103 "write": true, 00:05:42.103 "unmap": true, 00:05:42.103 "flush": true, 00:05:42.103 "reset": true, 00:05:42.103 "nvme_admin": false, 00:05:42.103 "nvme_io": false, 00:05:42.103 "nvme_io_md": false, 00:05:42.103 "write_zeroes": true, 00:05:42.103 "zcopy": true, 00:05:42.103 "get_zone_info": false, 00:05:42.103 "zone_management": false, 00:05:42.103 "zone_append": false, 00:05:42.103 "compare": false, 00:05:42.103 "compare_and_write": false, 00:05:42.103 "abort": true, 00:05:42.103 "seek_hole": false, 00:05:42.103 "seek_data": false, 00:05:42.103 "copy": true, 00:05:42.103 "nvme_iov_md": false 00:05:42.103 }, 00:05:42.103 "memory_domains": [ 00:05:42.103 { 00:05:42.103 "dma_device_id": "system", 00:05:42.103 "dma_device_type": 1 00:05:42.103 }, 00:05:42.103 { 00:05:42.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.103 "dma_device_type": 2 00:05:42.103 } 00:05:42.103 ], 00:05:42.103 "driver_specific": {} 00:05:42.103 } 00:05:42.103 ]' 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@17 -- # jq length 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.103 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.103 [2024-07-24 18:08:50.694431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.103 [2024-07-24 18:08:50.694460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.103 [2024-07-24 18:08:50.694472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb64d0 00:05:42.103 [2024-07-24 18:08:50.694481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.103 [2024-07-24 18:08:50.695402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.103 [2024-07-24 18:08:50.695424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.362 Passthru0 00:05:42.362 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.362 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.362 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.362 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.362 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.362 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.362 { 00:05:42.362 "name": "Malloc2", 00:05:42.362 "aliases": [ 00:05:42.362 "a7fa59ff-a2e5-4aa7-a109-0d6dee61e3c1" 00:05:42.362 ], 00:05:42.362 "product_name": "Malloc disk", 00:05:42.362 "block_size": 512, 00:05:42.362 "num_blocks": 16384, 00:05:42.362 
"uuid": "a7fa59ff-a2e5-4aa7-a109-0d6dee61e3c1", 00:05:42.362 "assigned_rate_limits": { 00:05:42.362 "rw_ios_per_sec": 0, 00:05:42.362 "rw_mbytes_per_sec": 0, 00:05:42.362 "r_mbytes_per_sec": 0, 00:05:42.362 "w_mbytes_per_sec": 0 00:05:42.362 }, 00:05:42.362 "claimed": true, 00:05:42.362 "claim_type": "exclusive_write", 00:05:42.362 "zoned": false, 00:05:42.362 "supported_io_types": { 00:05:42.362 "read": true, 00:05:42.362 "write": true, 00:05:42.362 "unmap": true, 00:05:42.362 "flush": true, 00:05:42.362 "reset": true, 00:05:42.362 "nvme_admin": false, 00:05:42.362 "nvme_io": false, 00:05:42.362 "nvme_io_md": false, 00:05:42.362 "write_zeroes": true, 00:05:42.362 "zcopy": true, 00:05:42.362 "get_zone_info": false, 00:05:42.362 "zone_management": false, 00:05:42.362 "zone_append": false, 00:05:42.362 "compare": false, 00:05:42.362 "compare_and_write": false, 00:05:42.362 "abort": true, 00:05:42.362 "seek_hole": false, 00:05:42.362 "seek_data": false, 00:05:42.362 "copy": true, 00:05:42.362 "nvme_iov_md": false 00:05:42.362 }, 00:05:42.362 "memory_domains": [ 00:05:42.362 { 00:05:42.362 "dma_device_id": "system", 00:05:42.362 "dma_device_type": 1 00:05:42.362 }, 00:05:42.362 { 00:05:42.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.362 "dma_device_type": 2 00:05:42.362 } 00:05:42.362 ], 00:05:42.362 "driver_specific": {} 00:05:42.362 }, 00:05:42.362 { 00:05:42.362 "name": "Passthru0", 00:05:42.362 "aliases": [ 00:05:42.362 "bb0e8d8d-5cbf-569e-9dc7-0ccc3a04caa3" 00:05:42.362 ], 00:05:42.362 "product_name": "passthru", 00:05:42.362 "block_size": 512, 00:05:42.362 "num_blocks": 16384, 00:05:42.362 "uuid": "bb0e8d8d-5cbf-569e-9dc7-0ccc3a04caa3", 00:05:42.362 "assigned_rate_limits": { 00:05:42.362 "rw_ios_per_sec": 0, 00:05:42.362 "rw_mbytes_per_sec": 0, 00:05:42.362 "r_mbytes_per_sec": 0, 00:05:42.362 "w_mbytes_per_sec": 0 00:05:42.362 }, 00:05:42.362 "claimed": false, 00:05:42.362 "zoned": false, 00:05:42.362 "supported_io_types": { 00:05:42.362 "read": true, 
00:05:42.362 "write": true, 00:05:42.362 "unmap": true, 00:05:42.362 "flush": true, 00:05:42.362 "reset": true, 00:05:42.362 "nvme_admin": false, 00:05:42.362 "nvme_io": false, 00:05:42.362 "nvme_io_md": false, 00:05:42.362 "write_zeroes": true, 00:05:42.362 "zcopy": true, 00:05:42.362 "get_zone_info": false, 00:05:42.362 "zone_management": false, 00:05:42.362 "zone_append": false, 00:05:42.362 "compare": false, 00:05:42.362 "compare_and_write": false, 00:05:42.362 "abort": true, 00:05:42.362 "seek_hole": false, 00:05:42.362 "seek_data": false, 00:05:42.362 "copy": true, 00:05:42.362 "nvme_iov_md": false 00:05:42.362 }, 00:05:42.362 "memory_domains": [ 00:05:42.362 { 00:05:42.362 "dma_device_id": "system", 00:05:42.362 "dma_device_type": 1 00:05:42.363 }, 00:05:42.363 { 00:05:42.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.363 "dma_device_type": 2 00:05:42.363 } 00:05:42.363 ], 00:05:42.363 "driver_specific": { 00:05:42.363 "passthru": { 00:05:42.363 "name": "Passthru0", 00:05:42.363 "base_bdev_name": "Malloc2" 00:05:42.363 } 00:05:42.363 } 00:05:42.363 } 00:05:42.363 ]' 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.363 00:05:42.363 real 0m0.290s 00:05:42.363 user 0m0.181s 00:05:42.363 sys 0m0.054s 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.363 18:08:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.363 ************************************ 00:05:42.363 END TEST rpc_daemon_integrity 00:05:42.363 ************************************ 00:05:42.363 18:08:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.363 18:08:50 rpc -- rpc/rpc.sh@84 -- # killprocess 2102498 00:05:42.363 18:08:50 rpc -- common/autotest_common.sh@950 -- # '[' -z 2102498 ']' 00:05:42.363 18:08:50 rpc -- common/autotest_common.sh@954 -- # kill -0 2102498 00:05:42.363 18:08:50 rpc -- common/autotest_common.sh@955 -- # uname 00:05:42.363 18:08:50 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.363 18:08:50 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2102498 00:05:42.363 18:08:50 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.363 18:08:50 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.363 18:08:50 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2102498' 00:05:42.363 killing process with pid 2102498 
00:05:42.363 18:08:50 rpc -- common/autotest_common.sh@969 -- # kill 2102498 00:05:42.363 18:08:50 rpc -- common/autotest_common.sh@974 -- # wait 2102498 00:05:42.931 00:05:42.931 real 0m2.575s 00:05:42.931 user 0m3.228s 00:05:42.931 sys 0m0.850s 00:05:42.931 18:08:51 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.931 18:08:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.931 ************************************ 00:05:42.931 END TEST rpc 00:05:42.931 ************************************ 00:05:42.931 18:08:51 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:42.931 18:08:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.931 18:08:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.931 18:08:51 -- common/autotest_common.sh@10 -- # set +x 00:05:42.931 ************************************ 00:05:42.931 START TEST skip_rpc 00:05:42.932 ************************************ 00:05:42.932 18:08:51 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:42.932 * Looking for test storage... 
00:05:42.932 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:42.932 18:08:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:42.932 18:08:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:42.932 18:08:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:42.932 18:08:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.932 18:08:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.932 18:08:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.932 ************************************ 00:05:42.932 START TEST skip_rpc 00:05:42.932 ************************************ 00:05:42.932 18:08:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:42.932 18:08:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2103052 00:05:42.932 18:08:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:42.932 18:08:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.932 18:08:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.192 [2024-07-24 18:08:51.534540] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:05:43.192 [2024-07-24 18:08:51.534584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2103052 ] 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:01.0 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:01.1 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:01.2 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:01.3 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:01.4 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:01.5 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:01.6 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:01.7 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:02.0 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:02.1 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:02.2 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:02.3 cannot be used 
00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:02.4 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:02.5 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:02.6 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b3:02.7 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:01.0 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:01.1 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:01.2 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:01.3 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:01.4 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:01.5 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:01.6 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:01.7 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:02.0 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:02.1 cannot be used 00:05:43.192 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:02.2 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:02.3 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:02.4 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:02.5 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:02.6 cannot be used 00:05:43.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:43.192 EAL: Requested device 0000:b5:02.7 cannot be used 00:05:43.192 [2024-07-24 18:08:51.628276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.192 [2024-07-24 18:08:51.697119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2103052 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2103052 ']' 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2103052 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2103052 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2103052' 00:05:48.470 killing process with pid 2103052 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2103052 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2103052 00:05:48.470 00:05:48.470 real 0m5.372s 00:05:48.470 user 0m5.086s 00:05:48.470 sys 0m0.317s 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.470 18:08:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.470 
************************************ 00:05:48.470 END TEST skip_rpc 00:05:48.470 ************************************ 00:05:48.470 18:08:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.470 18:08:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.470 18:08:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.470 18:08:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.470 ************************************ 00:05:48.470 START TEST skip_rpc_with_json 00:05:48.470 ************************************ 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2104042 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2104042 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2104042 ']' 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.470 18:08:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.471 [2024-07-24 18:08:57.000618] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:05:48.471 [2024-07-24 18:08:57.000669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104042 ] 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:01.0 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:01.1 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:01.2 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:01.3 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:01.4 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:01.5 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:01.6 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:01.7 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:02.0 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 
0000:b3:02.1 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:02.2 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:02.3 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:02.4 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:02.5 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:02.6 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b3:02.7 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:01.0 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:01.1 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:01.2 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:01.3 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:01.4 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:01.5 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:01.6 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:01.7 cannot be 
used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:02.0 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:02.1 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:02.2 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:02.3 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:02.4 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:02.5 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:02.6 cannot be used 00:05:48.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:48.471 EAL: Requested device 0000:b5:02.7 cannot be used 00:05:48.730 [2024-07-24 18:08:57.094101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.730 [2024-07-24 18:08:57.166986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.297 [2024-07-24 18:08:57.793515] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.297 
request: 00:05:49.297 { 00:05:49.297 "trtype": "tcp", 00:05:49.297 "method": "nvmf_get_transports", 00:05:49.297 "req_id": 1 00:05:49.297 } 00:05:49.297 Got JSON-RPC error response 00:05:49.297 response: 00:05:49.297 { 00:05:49.297 "code": -19, 00:05:49.297 "message": "No such device" 00:05:49.297 } 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.297 [2024-07-24 18:08:57.805622] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.297 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.557 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.557 18:08:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:49.557 { 00:05:49.557 "subsystems": [ 00:05:49.557 { 00:05:49.557 "subsystem": "keyring", 00:05:49.557 "config": [] 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "iobuf", 00:05:49.557 "config": [ 00:05:49.557 { 00:05:49.557 "method": "iobuf_set_options", 00:05:49.557 "params": { 00:05:49.557 "small_pool_count": 8192, 00:05:49.557 "large_pool_count": 1024, 00:05:49.557 "small_bufsize": 8192, 00:05:49.557 "large_bufsize": 135168 00:05:49.557 } 00:05:49.557 } 00:05:49.557 ] 00:05:49.557 }, 00:05:49.557 { 
00:05:49.557 "subsystem": "sock", 00:05:49.557 "config": [ 00:05:49.557 { 00:05:49.557 "method": "sock_set_default_impl", 00:05:49.557 "params": { 00:05:49.557 "impl_name": "posix" 00:05:49.557 } 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "method": "sock_impl_set_options", 00:05:49.557 "params": { 00:05:49.557 "impl_name": "ssl", 00:05:49.557 "recv_buf_size": 4096, 00:05:49.557 "send_buf_size": 4096, 00:05:49.557 "enable_recv_pipe": true, 00:05:49.557 "enable_quickack": false, 00:05:49.557 "enable_placement_id": 0, 00:05:49.557 "enable_zerocopy_send_server": true, 00:05:49.557 "enable_zerocopy_send_client": false, 00:05:49.557 "zerocopy_threshold": 0, 00:05:49.557 "tls_version": 0, 00:05:49.557 "enable_ktls": false 00:05:49.557 } 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "method": "sock_impl_set_options", 00:05:49.557 "params": { 00:05:49.557 "impl_name": "posix", 00:05:49.557 "recv_buf_size": 2097152, 00:05:49.557 "send_buf_size": 2097152, 00:05:49.557 "enable_recv_pipe": true, 00:05:49.557 "enable_quickack": false, 00:05:49.557 "enable_placement_id": 0, 00:05:49.557 "enable_zerocopy_send_server": true, 00:05:49.557 "enable_zerocopy_send_client": false, 00:05:49.557 "zerocopy_threshold": 0, 00:05:49.557 "tls_version": 0, 00:05:49.557 "enable_ktls": false 00:05:49.557 } 00:05:49.557 } 00:05:49.557 ] 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "vmd", 00:05:49.557 "config": [] 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "accel", 00:05:49.557 "config": [ 00:05:49.557 { 00:05:49.557 "method": "accel_set_options", 00:05:49.557 "params": { 00:05:49.557 "small_cache_size": 128, 00:05:49.557 "large_cache_size": 16, 00:05:49.557 "task_count": 2048, 00:05:49.557 "sequence_count": 2048, 00:05:49.557 "buf_count": 2048 00:05:49.557 } 00:05:49.557 } 00:05:49.557 ] 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "bdev", 00:05:49.557 "config": [ 00:05:49.557 { 00:05:49.557 "method": "bdev_set_options", 00:05:49.557 "params": { 00:05:49.557 
"bdev_io_pool_size": 65535, 00:05:49.557 "bdev_io_cache_size": 256, 00:05:49.557 "bdev_auto_examine": true, 00:05:49.557 "iobuf_small_cache_size": 128, 00:05:49.557 "iobuf_large_cache_size": 16 00:05:49.557 } 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "method": "bdev_raid_set_options", 00:05:49.557 "params": { 00:05:49.557 "process_window_size_kb": 1024, 00:05:49.557 "process_max_bandwidth_mb_sec": 0 00:05:49.557 } 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "method": "bdev_iscsi_set_options", 00:05:49.557 "params": { 00:05:49.557 "timeout_sec": 30 00:05:49.557 } 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "method": "bdev_nvme_set_options", 00:05:49.557 "params": { 00:05:49.557 "action_on_timeout": "none", 00:05:49.557 "timeout_us": 0, 00:05:49.557 "timeout_admin_us": 0, 00:05:49.557 "keep_alive_timeout_ms": 10000, 00:05:49.557 "arbitration_burst": 0, 00:05:49.557 "low_priority_weight": 0, 00:05:49.557 "medium_priority_weight": 0, 00:05:49.557 "high_priority_weight": 0, 00:05:49.557 "nvme_adminq_poll_period_us": 10000, 00:05:49.557 "nvme_ioq_poll_period_us": 0, 00:05:49.557 "io_queue_requests": 0, 00:05:49.557 "delay_cmd_submit": true, 00:05:49.557 "transport_retry_count": 4, 00:05:49.557 "bdev_retry_count": 3, 00:05:49.557 "transport_ack_timeout": 0, 00:05:49.557 "ctrlr_loss_timeout_sec": 0, 00:05:49.557 "reconnect_delay_sec": 0, 00:05:49.557 "fast_io_fail_timeout_sec": 0, 00:05:49.557 "disable_auto_failback": false, 00:05:49.557 "generate_uuids": false, 00:05:49.557 "transport_tos": 0, 00:05:49.557 "nvme_error_stat": false, 00:05:49.557 "rdma_srq_size": 0, 00:05:49.557 "io_path_stat": false, 00:05:49.557 "allow_accel_sequence": false, 00:05:49.557 "rdma_max_cq_size": 0, 00:05:49.557 "rdma_cm_event_timeout_ms": 0, 00:05:49.557 "dhchap_digests": [ 00:05:49.557 "sha256", 00:05:49.557 "sha384", 00:05:49.557 "sha512" 00:05:49.557 ], 00:05:49.557 "dhchap_dhgroups": [ 00:05:49.557 "null", 00:05:49.557 "ffdhe2048", 00:05:49.557 "ffdhe3072", 00:05:49.557 "ffdhe4096", 
00:05:49.557 "ffdhe6144", 00:05:49.557 "ffdhe8192" 00:05:49.557 ] 00:05:49.557 } 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "method": "bdev_nvme_set_hotplug", 00:05:49.557 "params": { 00:05:49.557 "period_us": 100000, 00:05:49.557 "enable": false 00:05:49.557 } 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "method": "bdev_wait_for_examine" 00:05:49.557 } 00:05:49.557 ] 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "scsi", 00:05:49.557 "config": null 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "scheduler", 00:05:49.557 "config": [ 00:05:49.557 { 00:05:49.557 "method": "framework_set_scheduler", 00:05:49.557 "params": { 00:05:49.557 "name": "static" 00:05:49.557 } 00:05:49.557 } 00:05:49.557 ] 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "vhost_scsi", 00:05:49.557 "config": [] 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "vhost_blk", 00:05:49.557 "config": [] 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "ublk", 00:05:49.557 "config": [] 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "nbd", 00:05:49.557 "config": [] 00:05:49.557 }, 00:05:49.557 { 00:05:49.557 "subsystem": "nvmf", 00:05:49.557 "config": [ 00:05:49.558 { 00:05:49.558 "method": "nvmf_set_config", 00:05:49.558 "params": { 00:05:49.558 "discovery_filter": "match_any", 00:05:49.558 "admin_cmd_passthru": { 00:05:49.558 "identify_ctrlr": false 00:05:49.558 } 00:05:49.558 } 00:05:49.558 }, 00:05:49.558 { 00:05:49.558 "method": "nvmf_set_max_subsystems", 00:05:49.558 "params": { 00:05:49.558 "max_subsystems": 1024 00:05:49.558 } 00:05:49.558 }, 00:05:49.558 { 00:05:49.558 "method": "nvmf_set_crdt", 00:05:49.558 "params": { 00:05:49.558 "crdt1": 0, 00:05:49.558 "crdt2": 0, 00:05:49.558 "crdt3": 0 00:05:49.558 } 00:05:49.558 }, 00:05:49.558 { 00:05:49.558 "method": "nvmf_create_transport", 00:05:49.558 "params": { 00:05:49.558 "trtype": "TCP", 00:05:49.558 "max_queue_depth": 128, 00:05:49.558 "max_io_qpairs_per_ctrlr": 127, 00:05:49.558 
"in_capsule_data_size": 4096, 00:05:49.558 "max_io_size": 131072, 00:05:49.558 "io_unit_size": 131072, 00:05:49.558 "max_aq_depth": 128, 00:05:49.558 "num_shared_buffers": 511, 00:05:49.558 "buf_cache_size": 4294967295, 00:05:49.558 "dif_insert_or_strip": false, 00:05:49.558 "zcopy": false, 00:05:49.558 "c2h_success": true, 00:05:49.558 "sock_priority": 0, 00:05:49.558 "abort_timeout_sec": 1, 00:05:49.558 "ack_timeout": 0, 00:05:49.558 "data_wr_pool_size": 0 00:05:49.558 } 00:05:49.558 } 00:05:49.558 ] 00:05:49.558 }, 00:05:49.558 { 00:05:49.558 "subsystem": "iscsi", 00:05:49.558 "config": [ 00:05:49.558 { 00:05:49.558 "method": "iscsi_set_options", 00:05:49.558 "params": { 00:05:49.558 "node_base": "iqn.2016-06.io.spdk", 00:05:49.558 "max_sessions": 128, 00:05:49.558 "max_connections_per_session": 2, 00:05:49.558 "max_queue_depth": 64, 00:05:49.558 "default_time2wait": 2, 00:05:49.558 "default_time2retain": 20, 00:05:49.558 "first_burst_length": 8192, 00:05:49.558 "immediate_data": true, 00:05:49.558 "allow_duplicated_isid": false, 00:05:49.558 "error_recovery_level": 0, 00:05:49.558 "nop_timeout": 60, 00:05:49.558 "nop_in_interval": 30, 00:05:49.558 "disable_chap": false, 00:05:49.558 "require_chap": false, 00:05:49.558 "mutual_chap": false, 00:05:49.558 "chap_group": 0, 00:05:49.558 "max_large_datain_per_connection": 64, 00:05:49.558 "max_r2t_per_connection": 4, 00:05:49.558 "pdu_pool_size": 36864, 00:05:49.558 "immediate_data_pool_size": 16384, 00:05:49.558 "data_out_pool_size": 2048 00:05:49.558 } 00:05:49.558 } 00:05:49.558 ] 00:05:49.558 } 00:05:49.558 ] 00:05:49.558 } 00:05:49.558 18:08:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.558 18:08:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2104042 00:05:49.558 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2104042 ']' 00:05:49.558 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # 
kill -0 2104042 00:05:49.558 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:49.558 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.558 18:08:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2104042 00:05:49.558 18:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.558 18:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.558 18:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2104042' 00:05:49.558 killing process with pid 2104042 00:05:49.558 18:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2104042 00:05:49.558 18:08:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2104042 00:05:49.817 18:08:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2104321 00:05:49.817 18:08:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:49.817 18:08:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2104321 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2104321 ']' 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2104321 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
2104321 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2104321' 00:05:55.089 killing process with pid 2104321 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2104321 00:05:55.089 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2104321 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:55.349 00:05:55.349 real 0m6.770s 00:05:55.349 user 0m6.511s 00:05:55.349 sys 0m0.700s 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.349 ************************************ 00:05:55.349 END TEST skip_rpc_with_json 00:05:55.349 ************************************ 00:05:55.349 18:09:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:55.349 18:09:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.349 18:09:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.349 18:09:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.349 ************************************ 00:05:55.349 START TEST skip_rpc_with_delay 00:05:55.349 ************************************ 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:55.349 18:09:03 
skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.349 [2024-07-24 18:09:03.859877] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:55.349 [2024-07-24 18:09:03.859946] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.349 00:05:55.349 real 0m0.077s 00:05:55.349 user 0m0.044s 00:05:55.349 sys 0m0.033s 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.349 18:09:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:55.349 ************************************ 00:05:55.349 END TEST skip_rpc_with_delay 00:05:55.349 ************************************ 00:05:55.349 18:09:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:55.349 18:09:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:55.349 18:09:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:55.349 18:09:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.349 18:09:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.349 18:09:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.658 ************************************ 00:05:55.658 START TEST exit_on_failed_rpc_init 00:05:55.658 ************************************ 00:05:55.658 18:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:55.658 18:09:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2105517 00:05:55.658 18:09:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2105517 00:05:55.658 18:09:03 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2105517 ']' 00:05:55.658 18:09:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.658 18:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.658 18:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.658 18:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.658 18:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.658 18:09:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.658 [2024-07-24 18:09:04.021505] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:05:55.658 [2024-07-24 18:09:04.021551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105517 ] 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:01.0 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:01.1 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:01.2 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:01.3 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:01.4 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:01.5 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:01.6 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:01.7 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:02.0 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:02.1 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:02.2 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:02.3 cannot be used 
00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:02.4 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:02.5 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:02.6 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b3:02.7 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b5:01.0 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b5:01.1 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b5:01.2 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b5:01.3 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b5:01.4 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b5:01.5 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b5:01.6 cannot be used 00:05:55.658 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.658 EAL: Requested device 0000:b5:01.7 cannot be used 00:05:55.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.659 EAL: Requested device 0000:b5:02.0 cannot be used 00:05:55.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.659 EAL: Requested device 0000:b5:02.1 cannot be used 00:05:55.659 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.659 EAL: Requested device 0000:b5:02.2 cannot be used 00:05:55.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.659 EAL: Requested device 0000:b5:02.3 cannot be used 00:05:55.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.659 EAL: Requested device 0000:b5:02.4 cannot be used 00:05:55.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.659 EAL: Requested device 0000:b5:02.5 cannot be used 00:05:55.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.659 EAL: Requested device 0000:b5:02.6 cannot be used 00:05:55.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:55.659 EAL: Requested device 0000:b5:02.7 cannot be used 00:05:55.659 [2024-07-24 18:09:04.116744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.659 [2024-07-24 18:09:04.189015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.240 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.241 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:56.241 18:09:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.500 [2024-07-24 18:09:04.869240] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:05:56.500 [2024-07-24 18:09:04.869292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105573 ] 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:01.0 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:01.1 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:01.2 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:01.3 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:01.4 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:01.5 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:01.6 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:01.7 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:02.0 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:02.1 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:02.2 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:02.3 cannot be used 
00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:02.4 cannot be used 00:05:56.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.500 EAL: Requested device 0000:b3:02.5 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b3:02.6 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b3:02.7 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:01.0 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:01.1 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:01.2 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:01.3 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:01.4 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:01.5 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:01.6 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:01.7 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:02.0 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:02.1 cannot be used 00:05:56.501 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:02.2 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:02.3 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:02.4 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:02.5 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:02.6 cannot be used 00:05:56.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:56.501 EAL: Requested device 0000:b5:02.7 cannot be used 00:05:56.501 [2024-07-24 18:09:04.959899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.501 [2024-07-24 18:09:05.028385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.501 [2024-07-24 18:09:05.028449] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:56.501 [2024-07-24 18:09:05.028461] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:56.501 [2024-07-24 18:09:05.028469] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.760 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:56.760 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.760 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:56.760 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2105517 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2105517 ']' 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2105517 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2105517 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2105517' 
00:05:56.761 killing process with pid 2105517 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2105517 00:05:56.761 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2105517 00:05:57.020 00:05:57.020 real 0m1.491s 00:05:57.020 user 0m1.639s 00:05:57.020 sys 0m0.496s 00:05:57.020 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.020 18:09:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.020 ************************************ 00:05:57.020 END TEST exit_on_failed_rpc_init 00:05:57.020 ************************************ 00:05:57.020 18:09:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:57.020 00:05:57.020 real 0m14.161s 00:05:57.020 user 0m13.452s 00:05:57.020 sys 0m1.859s 00:05:57.020 18:09:05 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.020 18:09:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.020 ************************************ 00:05:57.020 END TEST skip_rpc 00:05:57.020 ************************************ 00:05:57.021 18:09:05 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:57.021 18:09:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.021 18:09:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.021 18:09:05 -- common/autotest_common.sh@10 -- # set +x 00:05:57.021 ************************************ 00:05:57.021 START TEST rpc_client 00:05:57.021 ************************************ 00:05:57.021 18:09:05 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:57.280 * Looking for test storage... 
00:05:57.280 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:05:57.280 18:09:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:57.280 OK 00:05:57.280 18:09:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:57.280 00:05:57.280 real 0m0.125s 00:05:57.280 user 0m0.045s 00:05:57.280 sys 0m0.089s 00:05:57.280 18:09:05 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.280 18:09:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:57.280 ************************************ 00:05:57.280 END TEST rpc_client 00:05:57.280 ************************************ 00:05:57.280 18:09:05 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:57.280 18:09:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.280 18:09:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.280 18:09:05 -- common/autotest_common.sh@10 -- # set +x 00:05:57.280 ************************************ 00:05:57.280 START TEST json_config 00:05:57.280 ************************************ 00:05:57.280 18:09:05 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:57.280 18:09:05 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.280 18:09:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:57.540 18:09:05 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.540 18:09:05 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.540 18:09:05 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.540 18:09:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.540 18:09:05 
json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.540 18:09:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.540 18:09:05 json_config -- paths/export.sh@5 -- # export PATH 00:05:57.540 18:09:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@47 -- # : 0 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.540 18:09:05 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.540 18:09:05 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:05:57.540 18:09:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:57.540 18:09:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:57.540 18:09:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:57.540 18:09:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:57.540 18:09:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:57.540 18:09:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:57.540 18:09:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:57.540 18:09:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:57.540 18:09:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:57.541 18:09:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:57.541 18:09:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:05:57.541 18:09:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:57.541 18:09:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:57.541 18:09:05 json_config -- json_config/json_config.sh@359 -- # 
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.541 18:09:05 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:57.541 INFO: JSON configuration test init 00:05:57.541 18:09:05 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:57.541 18:09:05 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:57.541 18:09:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:57.541 18:09:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.541 18:09:05 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:57.541 18:09:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:57.541 18:09:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.541 18:09:05 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:57.541 18:09:05 json_config -- json_config/common.sh@9 -- # local app=target 00:05:57.541 18:09:05 json_config -- json_config/common.sh@10 -- # shift 00:05:57.541 18:09:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.541 18:09:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.541 18:09:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.541 18:09:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.541 18:09:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.541 18:09:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2105944 00:05:57.541 18:09:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:57.541 18:09:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.541 Waiting for target to run... 
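The `waitforlisten 2105944 /var/tmp/spdk_tgt.sock` step below blocks until the freshly launched `spdk_tgt` is accepting RPCs on its UNIX domain socket. A minimal standalone sketch of that retry loop (an assumption: simplified from the `max_retries=100` behavior visible in the trace, not the actual `autotest_common.sh` source — the real helper also checks that the target pid is still alive):

```shell
# Poll until a UNIX domain socket appears, giving up after max_retries
# attempts. Returns 0 once the socket exists, 1 on timeout.
waitforsocket() {
    local sock=$1 max_retries=${2:-100}
    while [ "$max_retries" -gt 0 ]; do
        # -S tests specifically for a socket file, not a plain file
        [ -S "$sock" ] && return 0
        max_retries=$((max_retries - 1))
        sleep 0.1
    done
    return 1
}
```

Polling for the socket (rather than sleeping a fixed interval) keeps the test fast on a lightly loaded node while still tolerating slow EAL initialization such as the QAT device scan that follows.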
00:05:57.541 18:09:05 json_config -- json_config/common.sh@25 -- # waitforlisten 2105944 /var/tmp/spdk_tgt.sock 00:05:57.541 18:09:05 json_config -- common/autotest_common.sh@831 -- # '[' -z 2105944 ']' 00:05:57.541 18:09:05 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.541 18:09:05 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.541 18:09:05 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.541 18:09:05 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.541 18:09:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.541 [2024-07-24 18:09:05.951069] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:05:57.541 [2024-07-24 18:09:05.951118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105944 ] 00:05:57.800 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.800 EAL: Requested device 0000:b3:01.0 cannot be used 00:05:57.800 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.800 EAL: Requested device 0000:b3:01.1 cannot be used 00:05:57.800 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.800 EAL: Requested device 0000:b3:01.2 cannot be used 00:05:57.800 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.800 EAL: Requested device 0000:b3:01.3 cannot be used 00:05:57.800 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.800 EAL: Requested device 0000:b3:01.4 cannot be used 00:05:57.800 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.800 EAL: Requested device 0000:b3:01.5 cannot be used 00:05:57.800 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.800 EAL: Requested device 0000:b3:01.6 cannot be used 00:05:57.800 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.800 EAL: Requested device 0000:b3:01.7 cannot be used 00:05:57.800 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.800 EAL: Requested device 0000:b3:02.0 cannot be used 00:05:57.800 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.800 EAL: Requested device 0000:b3:02.1 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b3:02.2 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b3:02.3 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b3:02.4 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b3:02.5 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b3:02.6 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b3:02.7 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:01.0 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:01.1 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:01.2 cannot be used 00:05:57.801 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:01.3 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:01.4 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:01.5 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:01.6 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:01.7 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:02.0 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:02.1 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:02.2 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:02.3 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:02.4 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:02.5 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:02.6 cannot be used 00:05:57.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:57.801 EAL: Requested device 0000:b5:02.7 cannot be used 00:05:57.801 [2024-07-24 18:09:06.255383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.801 [2024-07-24 18:09:06.320394] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:05:58.370 18:09:06 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.370 18:09:06 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:58.370 18:09:06 json_config -- json_config/common.sh@26 -- # echo '' 00:05:58.370 00:05:58.370 18:09:06 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:58.370 18:09:06 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:58.370 18:09:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:58.370 18:09:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.370 18:09:06 json_config -- json_config/json_config.sh@99 -- # [[ 1 -eq 1 ]] 00:05:58.370 18:09:06 json_config -- json_config/json_config.sh@100 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:05:58.370 18:09:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:05:58.370 18:09:06 json_config -- json_config/json_config.sh@101 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:58.370 18:09:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:58.630 [2024-07-24 18:09:07.074579] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:05:58.630 18:09:07 json_config -- json_config/json_config.sh@102 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:58.630 18:09:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:58.888 [2024-07-24 18:09:07.230969] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module 
dpdk_cryptodev 00:05:58.888 18:09:07 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:58.888 18:09:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:58.888 18:09:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.888 18:09:07 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:58.888 18:09:07 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:58.888 18:09:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:58.888 [2024-07-24 18:09:07.466678] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:04.165 18:09:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.165 18:09:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:04.165 18:09:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 
00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@51 -- # sort 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:04.165 18:09:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.165 18:09:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:06:04.165 18:09:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.165 18:09:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:06:04.165 18:09:12 json_config -- 
json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:06:04.165 18:09:12 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:04.165 18:09:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:04.425 18:09:12 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:06:04.425 18:09:12 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:04.425 18:09:12 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:04.425 18:09:12 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:06:04.425 18:09:12 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:06:04.425 18:09:12 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:06:04.425 18:09:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:06:04.684 Nvme0n1p0 Nvme0n1p1 00:06:04.684 18:09:13 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:06:04.684 18:09:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:06:04.684 [2024-07-24 18:09:13.231539] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:04.684 [2024-07-24 18:09:13.231579] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
Malloc0 00:06:04.684 00:06:04.684 18:09:13 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:06:04.684 18:09:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:06:04.941 Malloc3 00:06:04.941 18:09:13 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:04.941 18:09:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:05.200 [2024-07-24 18:09:13.572459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:05.200 [2024-07-24 18:09:13.572493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.200 [2024-07-24 18:09:13.572509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1501ef0 00:06:05.200 [2024-07-24 18:09:13.572518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.200 [2024-07-24 18:09:13.573569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.201 [2024-07-24 18:09:13.573611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:05.201 PTBdevFromMalloc3 00:06:05.201 18:09:13 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:06:05.201 18:09:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:06:05.201 Null0 00:06:05.201 18:09:13 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:06:05.201 18:09:13 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:06:05.460 Malloc0 00:06:05.460 18:09:13 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:06:05.460 18:09:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:06:05.719 Malloc1 00:06:05.719 18:09:14 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:06:05.719 18:09:14 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:06:05.719 102400+0 records in 00:06:05.719 102400+0 records out 00:06:05.720 104857600 bytes (105 MB, 100 MiB) copied, 0.209147 s, 501 MB/s 00:06:05.720 18:09:14 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:06:05.720 18:09:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:06:05.979 aio_disk 00:06:05.979 18:09:14 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:06:05.979 18:09:14 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:05.979 18:09:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:10.173 bf815464-a8fd-4e8b-91cb-b49833b89029 
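The `tgt_check_notification_types` step earlier in this run computes `type_diff` by piping both type lists through `tr ' ' '\n' | sort | uniq -u`. A self-contained sketch of that set-difference idiom (an assumption: reconstructed from the logged commands, not the `json_config.sh` source):

```shell
# Compare two space-separated sets of notification types.
enabled_types="bdev_register bdev_unregister"
get_types="bdev_register bdev_unregister"

# `uniq -u` keeps only lines that occur exactly once; since each type
# appears once per set, anything unmatched between the two sets survives.
type_diff=$(echo $enabled_types $get_types | tr ' ' '\n' | sort | uniq -u)

if [ -z "$type_diff" ]; then
    echo OK
else
    echo "MISMATCH: $type_diff"
fi
```

With identical sets, every type occurs twice and `uniq -u` emits nothing, which is why the trace shows `type_diff=` empty and the function returning 0.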
00:06:10.174 18:09:18 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:06:10.174 18:09:18 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:06:10.174 18:09:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:06:10.174 18:09:18 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:06:10.174 18:09:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:06:10.433 18:09:18 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:10.433 18:09:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:10.692 18:09:19 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:10.692 18:09:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:10.692 18:09:19 json_config -- json_config/json_config.sh@161 -- # [[ 1 -eq 1 ]] 00:06:10.692 18:09:19 json_config -- json_config/json_config.sh@162 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:10.692 18:09:19 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:06:10.950 MallocForCryptoBdev 00:06:10.950 18:09:19 json_config -- json_config/json_config.sh@163 -- # lspci -d:37c8 00:06:10.950 18:09:19 json_config -- json_config/json_config.sh@163 -- # wc -l 00:06:10.950 18:09:19 json_config -- json_config/json_config.sh@163 -- # [[ 5 -eq 0 ]] 00:06:10.950 18:09:19 json_config -- json_config/json_config.sh@166 -- # local crypto_driver=crypto_qat 00:06:10.950 18:09:19 json_config -- json_config/json_config.sh@169 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:10.950 18:09:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:11.210 [2024-07-24 18:09:19.592513] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:06:11.210 CryptoMallocBdev 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@173 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:ed366809-cdbc-45b4-9f4c-33ee09e669c6 bdev_register:7339409e-5bb0-4c85-956d-0306599324c9 bdev_register:9d0cc136-fa4f-4b48-a6e1-8c4fb8c53d11 bdev_register:2fb486d0-1275-4e83-b15f-a3466c6b12e5 
bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:ed366809-cdbc-45b4-9f4c-33ee09e669c6 bdev_register:7339409e-5bb0-4c85-956d-0306599324c9 bdev_register:9d0cc136-fa4f-4b48-a6e1-8c4fb8c53d11 bdev_register:2fb486d0-1275-4e83-b15f-a3466c6b12e5 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@75 -- # sort 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@76 -- # sort 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:06:11.210 18:09:19 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.210 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 
00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:ed366809-cdbc-45b4-9f4c-33ee09e669c6 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:7339409e-5bb0-4c85-956d-0306599324c9 00:06:11.211 18:09:19 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:9d0cc136-fa4f-4b48-a6e1-8c4fb8c53d11 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:2fb486d0-1275-4e83-b15f-a3466c6b12e5 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:MallocForCryptoBdev 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:CryptoMallocBdev 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:2fb486d0-1275-4e83-b15f-a3466c6b12e5 bdev_register:7339409e-5bb0-4c85-956d-0306599324c9 bdev_register:9d0cc136-fa4f-4b48-a6e1-8c4fb8c53d11 bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:ed366809-cdbc-45b4-9f4c-33ee09e669c6 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\f\b\4\8\6\d\0\-\1\2\7\5\-\4\e\8\3\-\b\1\5\f\-\a\3\4\6\6\c\6\b\1\2\e\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\3\3\9\4\0\9\e\-\5\b\b\0\-\4\c\8\5\-\9\5\6\d\-\0\3\0\6\5\9\9\3\2\4\c\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\d\0\c\c\1\3\6\-\f\a\4\f\-\4\b\4\8\-\a\6\e\1\-\8\c\4\f\b\8\c\5\3\d\1\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\d\3\6\6\8\0\9\-\c\d\b\c\-\4\5\b\4\-\9\f\4\c\-\3\3\e\e\0\9\e\6\6\9\c\6\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@90 -- # cat 00:06:11.211 18:09:19 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:2fb486d0-1275-4e83-b15f-a3466c6b12e5 bdev_register:7339409e-5bb0-4c85-956d-0306599324c9 bdev_register:9d0cc136-fa4f-4b48-a6e1-8c4fb8c53d11 bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:ed366809-cdbc-45b4-9f4c-33ee09e669c6 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:06:11.471 Expected events matched: 00:06:11.471 bdev_register:2fb486d0-1275-4e83-b15f-a3466c6b12e5 00:06:11.471 bdev_register:7339409e-5bb0-4c85-956d-0306599324c9 00:06:11.471 
bdev_register:9d0cc136-fa4f-4b48-a6e1-8c4fb8c53d11 00:06:11.471 bdev_register:aio_disk 00:06:11.471 bdev_register:CryptoMallocBdev 00:06:11.471 bdev_register:ed366809-cdbc-45b4-9f4c-33ee09e669c6 00:06:11.471 bdev_register:Malloc0 00:06:11.471 bdev_register:Malloc0p0 00:06:11.471 bdev_register:Malloc0p1 00:06:11.471 bdev_register:Malloc0p2 00:06:11.471 bdev_register:Malloc1 00:06:11.471 bdev_register:Malloc3 00:06:11.471 bdev_register:MallocForCryptoBdev 00:06:11.471 bdev_register:Null0 00:06:11.471 bdev_register:Nvme0n1 00:06:11.471 bdev_register:Nvme0n1p0 00:06:11.471 bdev_register:Nvme0n1p1 00:06:11.471 bdev_register:PTBdevFromMalloc3 00:06:11.471 18:09:19 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:06:11.471 18:09:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.471 18:09:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.471 18:09:19 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:11.471 18:09:19 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:11.471 18:09:19 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:06:11.471 18:09:19 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:11.471 18:09:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.471 18:09:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.471 18:09:19 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:11.471 18:09:19 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:11.471 18:09:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:11.471 MallocBdevForConfigChangeCheck 00:06:11.730 18:09:20 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:11.730 18:09:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.730 18:09:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.730 18:09:20 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:11.730 18:09:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.990 18:09:20 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:11.990 INFO: shutting down applications... 00:06:11.990 18:09:20 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:11.990 18:09:20 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:11.990 18:09:20 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:11.990 18:09:20 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:11.990 [2024-07-24 18:09:20.571320] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:15.281 Calling clear_iscsi_subsystem 00:06:15.281 Calling clear_nvmf_subsystem 00:06:15.281 Calling clear_nbd_subsystem 00:06:15.281 Calling clear_ublk_subsystem 00:06:15.281 Calling clear_vhost_blk_subsystem 00:06:15.281 Calling clear_vhost_scsi_subsystem 00:06:15.281 Calling clear_bdev_subsystem 00:06:15.281 18:09:23 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:06:15.281 18:09:23 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:15.281 18:09:23 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:15.281 18:09:23 json_config -- json_config/json_config.sh@349 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.281 18:09:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:15.281 18:09:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:15.281 18:09:23 json_config -- json_config/json_config.sh@349 -- # break 00:06:15.281 18:09:23 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:15.281 18:09:23 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:15.281 18:09:23 json_config -- json_config/common.sh@31 -- # local app=target 00:06:15.281 18:09:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:15.281 18:09:23 json_config -- json_config/common.sh@35 -- # [[ -n 2105944 ]] 00:06:15.281 18:09:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2105944 00:06:15.281 18:09:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:15.281 18:09:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.281 18:09:23 json_config -- json_config/common.sh@41 -- # kill -0 2105944 00:06:15.281 18:09:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.541 18:09:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.541 18:09:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.541 18:09:23 json_config -- json_config/common.sh@41 -- # kill -0 2105944 00:06:15.541 18:09:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.541 18:09:23 json_config -- json_config/common.sh@43 -- # break 00:06:15.541 18:09:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.541 18:09:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.541 SPDK target 
shutdown done 00:06:15.541 18:09:23 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:15.541 INFO: relaunching applications... 00:06:15.541 18:09:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.541 18:09:23 json_config -- json_config/common.sh@9 -- # local app=target 00:06:15.541 18:09:23 json_config -- json_config/common.sh@10 -- # shift 00:06:15.541 18:09:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:15.541 18:09:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:15.541 18:09:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:15.541 18:09:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.541 18:09:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.541 18:09:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2109690 00:06:15.541 18:09:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:15.541 Waiting for target to run... 00:06:15.541 18:09:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.541 18:09:23 json_config -- json_config/common.sh@25 -- # waitforlisten 2109690 /var/tmp/spdk_tgt.sock 00:06:15.541 18:09:23 json_config -- common/autotest_common.sh@831 -- # '[' -z 2109690 ']' 00:06:15.541 18:09:23 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:15.541 18:09:23 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.541 18:09:23 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:15.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:15.541 18:09:23 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.541 18:09:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.541 [2024-07-24 18:09:24.009825] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:06:15.541 [2024-07-24 18:09:24.009895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109690 ] 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:01.0 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:01.1 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:01.2 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:01.3 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:01.4 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:01.5 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:01.6 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:01.7 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:02.0 cannot be used 00:06:16.112 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:02.1 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:02.2 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:02.3 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:02.4 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:02.5 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:02.6 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b3:02.7 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b5:01.0 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b5:01.1 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b5:01.2 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b5:01.3 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b5:01.4 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b5:01.5 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b5:01.6 cannot be used 00:06:16.112 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b5:01.7 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b5:02.0 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.112 EAL: Requested device 0000:b5:02.1 cannot be used 00:06:16.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.113 EAL: Requested device 0000:b5:02.2 cannot be used 00:06:16.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.113 EAL: Requested device 0000:b5:02.3 cannot be used 00:06:16.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.113 EAL: Requested device 0000:b5:02.4 cannot be used 00:06:16.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.113 EAL: Requested device 0000:b5:02.5 cannot be used 00:06:16.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.113 EAL: Requested device 0000:b5:02.6 cannot be used 00:06:16.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.113 EAL: Requested device 0000:b5:02.7 cannot be used 00:06:16.113 [2024-07-24 18:09:24.469676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.113 [2024-07-24 18:09:24.551492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.113 [2024-07-24 18:09:24.604922] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:06:16.113 [2024-07-24 18:09:24.612952] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:06:16.113 [2024-07-24 18:09:24.620969] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:06:16.113 [2024-07-24 18:09:24.700424] 
accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:18.740 [2024-07-24 18:09:26.844287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:18.740 [2024-07-24 18:09:26.844334] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:18.740 [2024-07-24 18:09:26.844344] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:18.740 [2024-07-24 18:09:26.852305] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:18.740 [2024-07-24 18:09:26.852325] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:18.740 [2024-07-24 18:09:26.860320] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:18.740 [2024-07-24 18:09:26.860335] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:18.740 [2024-07-24 18:09:26.868350] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:06:18.740 [2024-07-24 18:09:26.868369] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:06:18.740 [2024-07-24 18:09:26.868377] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:21.275 [2024-07-24 18:09:29.746542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:21.275 [2024-07-24 18:09:29.746582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.276 [2024-07-24 18:09:29.746594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a83640 00:06:21.276 [2024-07-24 18:09:29.746603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.276 [2024-07-24 18:09:29.746815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:06:21.276 [2024-07-24 18:09:29.746828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:21.276 18:09:29 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.276 18:09:29 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:21.276 18:09:29 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.276 00:06:21.276 18:09:29 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:21.276 18:09:29 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:21.276 INFO: Checking if target configuration is the same... 00:06:21.276 18:09:29 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.535 18:09:29 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:21.535 18:09:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.535 + '[' 2 -ne 2 ']' 00:06:21.535 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:21.535 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:06:21.535 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:21.535 +++ basename /dev/fd/62 00:06:21.535 ++ mktemp /tmp/62.XXX 00:06:21.535 + tmp_file_1=/tmp/62.9Ql 00:06:21.535 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.535 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:21.535 + tmp_file_2=/tmp/spdk_tgt_config.json.IFZ 00:06:21.535 + ret=0 00:06:21.535 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:21.795 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:21.795 + diff -u /tmp/62.9Ql /tmp/spdk_tgt_config.json.IFZ 00:06:21.795 + echo 'INFO: JSON config files are the same' 00:06:21.795 INFO: JSON config files are the same 00:06:21.795 + rm /tmp/62.9Ql /tmp/spdk_tgt_config.json.IFZ 00:06:21.795 + exit 0 00:06:21.795 18:09:30 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:21.795 18:09:30 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:21.795 INFO: changing configuration and checking if this can be detected... 
00:06:21.795 18:09:30 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:21.795 18:09:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.056 18:09:30 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:22.056 18:09:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.056 18:09:30 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.056 + '[' 2 -ne 2 ']' 00:06:22.056 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.056 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:06:22.056 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:22.056 +++ basename /dev/fd/62 00:06:22.056 ++ mktemp /tmp/62.XXX 00:06:22.056 + tmp_file_1=/tmp/62.M7u 00:06:22.056 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.056 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.056 + tmp_file_2=/tmp/spdk_tgt_config.json.u4l 00:06:22.056 + ret=0 00:06:22.056 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.315 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.315 + diff -u /tmp/62.M7u /tmp/spdk_tgt_config.json.u4l 00:06:22.315 + ret=1 00:06:22.315 + echo '=== Start of file: /tmp/62.M7u ===' 00:06:22.315 + cat /tmp/62.M7u 00:06:22.315 + echo '=== End of file: /tmp/62.M7u ===' 00:06:22.315 + echo '' 00:06:22.315 + echo '=== Start of file: /tmp/spdk_tgt_config.json.u4l ===' 00:06:22.315 + cat /tmp/spdk_tgt_config.json.u4l 00:06:22.315 + echo '=== End of file: /tmp/spdk_tgt_config.json.u4l ===' 00:06:22.315 + echo '' 00:06:22.315 + rm /tmp/62.M7u /tmp/spdk_tgt_config.json.u4l 00:06:22.315 + exit 1 00:06:22.315 18:09:30 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:22.315 INFO: configuration change detected. 
00:06:22.315 18:09:30 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:22.315 18:09:30 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:22.316 18:09:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.316 18:09:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.316 18:09:30 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:22.316 18:09:30 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:22.316 18:09:30 json_config -- json_config/json_config.sh@321 -- # [[ -n 2109690 ]] 00:06:22.316 18:09:30 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:22.316 18:09:30 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:22.316 18:09:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.316 18:09:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.316 18:09:30 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:06:22.316 18:09:30 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:06:22.316 18:09:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:06:22.575 18:09:30 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:06:22.575 18:09:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:06:22.575 18:09:31 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:06:22.575 18:09:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete 
lvs_test/snapshot0 00:06:22.834 18:09:31 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:06:22.834 18:09:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:06:23.093 18:09:31 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:23.093 18:09:31 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:23.093 18:09:31 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:23.093 18:09:31 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:23.093 18:09:31 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.093 18:09:31 json_config -- json_config/json_config.sh@327 -- # killprocess 2109690 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@950 -- # '[' -z 2109690 ']' 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@954 -- # kill -0 2109690 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@955 -- # uname 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2109690 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2109690' 00:06:23.093 killing process with pid 2109690 00:06:23.093 18:09:31 json_config -- common/autotest_common.sh@969 -- # kill 2109690 00:06:23.093 18:09:31 json_config -- 
common/autotest_common.sh@974 -- # wait 2109690 00:06:25.631 18:09:34 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.631 18:09:34 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:25.631 18:09:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.631 18:09:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.890 18:09:34 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:25.890 18:09:34 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:25.890 INFO: Success 00:06:25.890 00:06:25.890 real 0m28.466s 00:06:25.890 user 0m31.338s 00:06:25.890 sys 0m3.230s 00:06:25.890 18:09:34 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.890 18:09:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.890 ************************************ 00:06:25.890 END TEST json_config 00:06:25.890 ************************************ 00:06:25.890 18:09:34 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.890 18:09:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.890 18:09:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.890 18:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:25.890 ************************************ 00:06:25.890 START TEST json_config_extra_key 00:06:25.890 ************************************ 00:06:25.890 18:09:34 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:25.890 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.890 18:09:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:25.890 18:09:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.890 18:09:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.890 18:09:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.890 18:09:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.890 18:09:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.890 18:09:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:06:25.891 18:09:34 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:06:25.891 18:09:34 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.891 18:09:34 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.891 18:09:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.891 18:09:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.891 18:09:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.891 18:09:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:25.891 18:09:34 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.891 18:09:34 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 
1024') 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:25.891 INFO: launching applications... 00:06:25.891 18:09:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.891 18:09:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:25.891 18:09:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:25.891 18:09:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.891 18:09:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.891 18:09:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.891 18:09:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.891 18:09:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.891 18:09:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2111586 00:06:25.891 18:09:34 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:25.891 
18:09:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.891 Waiting for target to run... 00:06:25.891 18:09:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2111586 /var/tmp/spdk_tgt.sock 00:06:25.891 18:09:34 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2111586 ']' 00:06:25.891 18:09:34 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.891 18:09:34 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.891 18:09:34 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.891 18:09:34 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.891 18:09:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:26.151 [2024-07-24 18:09:34.491375] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:06:26.151 [2024-07-24 18:09:34.491427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111586 ] 00:06:26.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.409 EAL: Requested device 0000:b3:01.0 cannot be used 00:06:26.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.409 EAL: Requested device 0000:b3:01.1 cannot be used 00:06:26.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.409 EAL: Requested device 0000:b3:01.2 cannot be used 00:06:26.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.409 EAL: Requested device 0000:b3:01.3 cannot be used 00:06:26.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.409 EAL: Requested device 0000:b3:01.4 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:01.5 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:01.6 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:01.7 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:02.0 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:02.1 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:02.2 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:02.3 cannot be used 
00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:02.4 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:02.5 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:02.6 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b3:02.7 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:01.0 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:01.1 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:01.2 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:01.3 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:01.4 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:01.5 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:01.6 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:01.7 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:02.0 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:02.1 cannot be used 00:06:26.410 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:02.2 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:02.3 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:02.4 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:02.5 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:02.6 cannot be used 00:06:26.410 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:26.410 EAL: Requested device 0000:b5:02.7 cannot be used 00:06:26.410 [2024-07-24 18:09:34.806579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.410 [2024-07-24 18:09:34.868286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.977 18:09:35 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.977 18:09:35 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:26.977 18:09:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:26.977 00:06:26.977 18:09:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:26.977 INFO: shutting down applications... 
00:06:26.977 18:09:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:26.977 18:09:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:26.977 18:09:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:26.977 18:09:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2111586 ]] 00:06:26.978 18:09:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2111586 00:06:26.978 18:09:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:26.978 18:09:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.978 18:09:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2111586 00:06:26.978 18:09:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.237 18:09:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.237 18:09:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.237 18:09:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2111586 00:06:27.237 18:09:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:27.237 18:09:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:27.237 18:09:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:27.237 18:09:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:27.237 SPDK target shutdown done 00:06:27.237 18:09:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:27.237 Success 00:06:27.237 00:06:27.237 real 0m1.473s 00:06:27.237 user 0m1.038s 00:06:27.237 sys 0m0.428s 00:06:27.237 18:09:35 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.237 18:09:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.237 
************************************ 00:06:27.237 END TEST json_config_extra_key 00:06:27.237 ************************************ 00:06:27.496 18:09:35 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.496 18:09:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.496 18:09:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.496 18:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:27.496 ************************************ 00:06:27.496 START TEST alias_rpc 00:06:27.496 ************************************ 00:06:27.496 18:09:35 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:27.496 * Looking for test storage... 00:06:27.496 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:06:27.496 18:09:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.496 18:09:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2111950 00:06:27.496 18:09:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.496 18:09:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2111950 00:06:27.496 18:09:35 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2111950 ']' 00:06:27.496 18:09:35 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.496 18:09:35 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.496 18:09:35 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.496 18:09:35 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.496 18:09:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.496 [2024-07-24 18:09:36.053090] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:06:27.496 [2024-07-24 18:09:36.053146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111950 ] 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:01.0 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:01.1 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:01.2 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:01.3 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:01.4 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:01.5 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:01.6 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:01.7 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:02.0 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:02.1 cannot be used 
00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:02.2 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:02.3 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:02.4 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:02.5 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:02.6 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b3:02.7 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:01.0 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:01.1 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:01.2 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:01.3 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:01.4 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:01.5 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:01.6 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:01.7 cannot be used 00:06:27.756 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:02.0 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:02.1 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:02.2 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:02.3 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:02.4 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:02.5 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:02.6 cannot be used 00:06:27.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:27.756 EAL: Requested device 0000:b5:02.7 cannot be used 00:06:27.756 [2024-07-24 18:09:36.146599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.756 [2024-07-24 18:09:36.215458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.325 18:09:36 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.325 18:09:36 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:28.325 18:09:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:28.584 18:09:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2111950 00:06:28.584 18:09:37 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2111950 ']' 00:06:28.584 18:09:37 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2111950 00:06:28.584 18:09:37 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:28.584 18:09:37 alias_rpc 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.584 18:09:37 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2111950 00:06:28.584 18:09:37 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.584 18:09:37 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.584 18:09:37 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2111950' 00:06:28.584 killing process with pid 2111950 00:06:28.584 18:09:37 alias_rpc -- common/autotest_common.sh@969 -- # kill 2111950 00:06:28.584 18:09:37 alias_rpc -- common/autotest_common.sh@974 -- # wait 2111950 00:06:28.844 00:06:28.844 real 0m1.505s 00:06:28.844 user 0m1.557s 00:06:28.844 sys 0m0.483s 00:06:28.844 18:09:37 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.844 18:09:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.844 ************************************ 00:06:28.844 END TEST alias_rpc 00:06:28.844 ************************************ 00:06:28.844 18:09:37 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:28.844 18:09:37 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:28.844 18:09:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.844 18:09:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.844 18:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:29.104 ************************************ 00:06:29.104 START TEST spdkcli_tcp 00:06:29.104 ************************************ 00:06:29.104 18:09:37 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:29.104 * Looking for test storage... 
00:06:29.104 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:06:29.104 18:09:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:06:29.104 18:09:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:29.104 18:09:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:06:29.104 18:09:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:29.104 18:09:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:29.104 18:09:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:29.104 18:09:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:29.104 18:09:37 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.104 18:09:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.104 18:09:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2112299 00:06:29.104 18:09:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2112299 00:06:29.104 18:09:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:29.104 18:09:37 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2112299 ']' 00:06:29.104 18:09:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.104 18:09:37 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.104 18:09:37 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:29.104 18:09:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:29.104 18:09:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:29.104 [2024-07-24 18:09:37.646073] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:06:29.104 [2024-07-24 18:09:37.646129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112299 ]
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:01.0 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:01.1 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:01.2 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:01.3 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:01.4 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:01.5 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:01.6 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:01.7 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:02.0 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:02.1 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:02.2 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:02.3 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:02.4 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:02.5 cannot be used
00:06:29.104 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.104 EAL: Requested device 0000:b3:02.6 cannot be used
00:06:29.105 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.105 EAL: Requested device 0000:b3:02.7 cannot be used
00:06:29.105 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.105 EAL: Requested device 0000:b5:01.0 cannot be used
00:06:29.105 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.105 EAL: Requested device 0000:b5:01.1 cannot be used
00:06:29.105 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.105 EAL: Requested device 0000:b5:01.2 cannot be used
00:06:29.105 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.105 EAL: Requested device 0000:b5:01.3 cannot be used
00:06:29.105 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.105 EAL: Requested device 0000:b5:01.4 cannot be used
00:06:29.105 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.105 EAL: Requested device 0000:b5:01.5 cannot be used
00:06:29.105 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.105 EAL: Requested device 0000:b5:01.6 cannot be used
00:06:29.364 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.364 EAL: Requested device 0000:b5:01.7 cannot be used
00:06:29.364 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.364 EAL: Requested device 0000:b5:02.0 cannot be used
00:06:29.364 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.364 EAL: Requested device 0000:b5:02.1 cannot be used
00:06:29.364 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.364 EAL: Requested device 0000:b5:02.2 cannot be used
00:06:29.364 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.364 EAL: Requested device 0000:b5:02.3 cannot be used
00:06:29.364 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.364 EAL: Requested device 0000:b5:02.4 cannot be used
00:06:29.364 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.364 EAL: Requested device 0000:b5:02.5 cannot be used
00:06:29.364 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.364 EAL: Requested device 0000:b5:02.6 cannot be used
00:06:29.364 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:29.364 EAL: Requested device 0000:b5:02.7 cannot be used
00:06:29.364 [2024-07-24 18:09:37.740572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:29.364 [2024-07-24 18:09:37.815929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:29.364 [2024-07-24 18:09:37.815933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.933 18:09:38 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:29.933 18:09:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0
00:06:29.933 18:09:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2112359
00:06:29.933 18:09:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:29.933 18:09:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:30.192 [
00:06:30.192 "bdev_malloc_delete",
00:06:30.192 "bdev_malloc_create",
00:06:30.192 "bdev_null_resize",
00:06:30.192 "bdev_null_delete",
00:06:30.192 "bdev_null_create",
00:06:30.192 "bdev_nvme_cuse_unregister",
00:06:30.192 "bdev_nvme_cuse_register",
00:06:30.192 "bdev_opal_new_user",
00:06:30.192 "bdev_opal_set_lock_state",
00:06:30.192 "bdev_opal_delete",
00:06:30.192 "bdev_opal_get_info",
00:06:30.192 "bdev_opal_create",
00:06:30.192 "bdev_nvme_opal_revert",
00:06:30.192 "bdev_nvme_opal_init",
00:06:30.192 "bdev_nvme_send_cmd",
00:06:30.192 "bdev_nvme_get_path_iostat",
00:06:30.192 "bdev_nvme_get_mdns_discovery_info",
00:06:30.192 "bdev_nvme_stop_mdns_discovery",
00:06:30.192 "bdev_nvme_start_mdns_discovery",
00:06:30.192 "bdev_nvme_set_multipath_policy",
00:06:30.192 "bdev_nvme_set_preferred_path",
00:06:30.192 "bdev_nvme_get_io_paths",
00:06:30.193 "bdev_nvme_remove_error_injection",
00:06:30.193 "bdev_nvme_add_error_injection",
00:06:30.193 "bdev_nvme_get_discovery_info",
00:06:30.193 "bdev_nvme_stop_discovery",
00:06:30.193 "bdev_nvme_start_discovery",
00:06:30.193 "bdev_nvme_get_controller_health_info",
00:06:30.193 "bdev_nvme_disable_controller",
00:06:30.193 "bdev_nvme_enable_controller",
00:06:30.193 "bdev_nvme_reset_controller",
00:06:30.193 "bdev_nvme_get_transport_statistics",
00:06:30.193 "bdev_nvme_apply_firmware",
00:06:30.193 "bdev_nvme_detach_controller",
00:06:30.193 "bdev_nvme_get_controllers",
00:06:30.193 "bdev_nvme_attach_controller",
00:06:30.193 "bdev_nvme_set_hotplug",
00:06:30.193 "bdev_nvme_set_options",
00:06:30.193 "bdev_passthru_delete",
00:06:30.193 "bdev_passthru_create",
00:06:30.193 "bdev_lvol_set_parent_bdev",
00:06:30.193 "bdev_lvol_set_parent",
00:06:30.193 "bdev_lvol_check_shallow_copy",
00:06:30.193 "bdev_lvol_start_shallow_copy",
00:06:30.193 "bdev_lvol_grow_lvstore",
00:06:30.193 "bdev_lvol_get_lvols",
00:06:30.193 "bdev_lvol_get_lvstores",
00:06:30.193 "bdev_lvol_delete",
00:06:30.193 "bdev_lvol_set_read_only",
00:06:30.193 "bdev_lvol_resize",
00:06:30.193 "bdev_lvol_decouple_parent",
00:06:30.193 "bdev_lvol_inflate",
00:06:30.193 "bdev_lvol_rename",
00:06:30.193 "bdev_lvol_clone_bdev",
00:06:30.193 "bdev_lvol_clone",
00:06:30.193 "bdev_lvol_snapshot",
00:06:30.193 "bdev_lvol_create",
00:06:30.193 "bdev_lvol_delete_lvstore",
00:06:30.193 "bdev_lvol_rename_lvstore",
00:06:30.193 "bdev_lvol_create_lvstore",
00:06:30.193 "bdev_raid_set_options",
00:06:30.193 "bdev_raid_remove_base_bdev",
00:06:30.193 "bdev_raid_add_base_bdev",
00:06:30.193 "bdev_raid_delete",
00:06:30.193 "bdev_raid_create",
00:06:30.193 "bdev_raid_get_bdevs",
00:06:30.193 "bdev_error_inject_error",
00:06:30.193 "bdev_error_delete",
00:06:30.193 "bdev_error_create",
00:06:30.193 "bdev_split_delete",
00:06:30.193 "bdev_split_create",
00:06:30.193 "bdev_delay_delete",
00:06:30.193 "bdev_delay_create",
00:06:30.193 "bdev_delay_update_latency",
00:06:30.193 "bdev_zone_block_delete",
00:06:30.193 "bdev_zone_block_create",
00:06:30.193 "blobfs_create",
00:06:30.193 "blobfs_detect",
00:06:30.193 "blobfs_set_cache_size",
00:06:30.193 "bdev_crypto_delete",
00:06:30.193 "bdev_crypto_create",
00:06:30.193 "bdev_compress_delete",
00:06:30.193 "bdev_compress_create",
00:06:30.193 "bdev_compress_get_orphans",
00:06:30.193 "bdev_aio_delete",
00:06:30.193 "bdev_aio_rescan",
00:06:30.193 "bdev_aio_create",
00:06:30.193 "bdev_ftl_set_property",
00:06:30.193 "bdev_ftl_get_properties",
00:06:30.193 "bdev_ftl_get_stats",
00:06:30.193 "bdev_ftl_unmap",
00:06:30.193 "bdev_ftl_unload",
00:06:30.193 "bdev_ftl_delete",
00:06:30.193 "bdev_ftl_load",
00:06:30.193 "bdev_ftl_create",
00:06:30.193 "bdev_virtio_attach_controller",
00:06:30.193 "bdev_virtio_scsi_get_devices",
00:06:30.193 "bdev_virtio_detach_controller",
00:06:30.193 "bdev_virtio_blk_set_hotplug",
00:06:30.193 "bdev_iscsi_delete",
00:06:30.193 "bdev_iscsi_create",
00:06:30.193 "bdev_iscsi_set_options",
00:06:30.193 "accel_error_inject_error",
00:06:30.193 "ioat_scan_accel_module",
00:06:30.193 "dsa_scan_accel_module",
00:06:30.193 "iaa_scan_accel_module",
00:06:30.193 "dpdk_cryptodev_get_driver",
00:06:30.193 "dpdk_cryptodev_set_driver",
00:06:30.193 "dpdk_cryptodev_scan_accel_module",
00:06:30.193 "compressdev_scan_accel_module",
00:06:30.193 "keyring_file_remove_key",
00:06:30.193 "keyring_file_add_key",
00:06:30.193 "keyring_linux_set_options",
00:06:30.193 "iscsi_get_histogram",
00:06:30.193 "iscsi_enable_histogram",
00:06:30.193 "iscsi_set_options",
00:06:30.193 "iscsi_get_auth_groups",
00:06:30.193 "iscsi_auth_group_remove_secret",
00:06:30.193 "iscsi_auth_group_add_secret",
00:06:30.193 "iscsi_delete_auth_group",
00:06:30.193 "iscsi_create_auth_group",
00:06:30.193 "iscsi_set_discovery_auth",
00:06:30.193 "iscsi_get_options",
00:06:30.193 "iscsi_target_node_request_logout",
00:06:30.193 "iscsi_target_node_set_redirect",
00:06:30.193 "iscsi_target_node_set_auth",
00:06:30.193 "iscsi_target_node_add_lun",
00:06:30.193 "iscsi_get_stats",
00:06:30.193 "iscsi_get_connections",
00:06:30.193 "iscsi_portal_group_set_auth",
00:06:30.193 "iscsi_start_portal_group",
00:06:30.193 "iscsi_delete_portal_group",
00:06:30.193 "iscsi_create_portal_group",
00:06:30.193 "iscsi_get_portal_groups",
00:06:30.193 "iscsi_delete_target_node",
00:06:30.193 "iscsi_target_node_remove_pg_ig_maps",
00:06:30.193 "iscsi_target_node_add_pg_ig_maps",
00:06:30.193 "iscsi_create_target_node",
00:06:30.193 "iscsi_get_target_nodes",
00:06:30.193 "iscsi_delete_initiator_group",
00:06:30.193 "iscsi_initiator_group_remove_initiators",
00:06:30.193 "iscsi_initiator_group_add_initiators",
00:06:30.193 "iscsi_create_initiator_group",
00:06:30.193 "iscsi_get_initiator_groups",
00:06:30.193 "nvmf_set_crdt",
00:06:30.193 "nvmf_set_config",
00:06:30.193 "nvmf_set_max_subsystems",
00:06:30.193 "nvmf_stop_mdns_prr",
00:06:30.193 "nvmf_publish_mdns_prr",
00:06:30.193 "nvmf_subsystem_get_listeners",
00:06:30.193 "nvmf_subsystem_get_qpairs",
00:06:30.193 "nvmf_subsystem_get_controllers",
00:06:30.193 "nvmf_get_stats",
00:06:30.193 "nvmf_get_transports",
00:06:30.193 "nvmf_create_transport",
00:06:30.193 "nvmf_get_targets",
00:06:30.193 "nvmf_delete_target",
00:06:30.193 "nvmf_create_target",
00:06:30.193 "nvmf_subsystem_allow_any_host",
00:06:30.193 "nvmf_subsystem_remove_host",
00:06:30.193 "nvmf_subsystem_add_host",
00:06:30.193 "nvmf_ns_remove_host",
00:06:30.193 "nvmf_ns_add_host",
00:06:30.193 "nvmf_subsystem_remove_ns",
00:06:30.193 "nvmf_subsystem_add_ns",
00:06:30.193 "nvmf_subsystem_listener_set_ana_state",
00:06:30.193 "nvmf_discovery_get_referrals",
00:06:30.193 "nvmf_discovery_remove_referral",
00:06:30.193 "nvmf_discovery_add_referral",
00:06:30.193 "nvmf_subsystem_remove_listener",
00:06:30.193 "nvmf_subsystem_add_listener",
00:06:30.193 "nvmf_delete_subsystem",
00:06:30.193 "nvmf_create_subsystem",
00:06:30.193 "nvmf_get_subsystems",
00:06:30.193 "env_dpdk_get_mem_stats",
00:06:30.193 "nbd_get_disks",
00:06:30.193 "nbd_stop_disk",
00:06:30.193 "nbd_start_disk",
00:06:30.193 "ublk_recover_disk",
00:06:30.193 "ublk_get_disks",
00:06:30.193 "ublk_stop_disk",
00:06:30.193 "ublk_start_disk",
00:06:30.193 "ublk_destroy_target",
00:06:30.193 "ublk_create_target",
00:06:30.193 "virtio_blk_create_transport",
00:06:30.193 "virtio_blk_get_transports",
00:06:30.193 "vhost_controller_set_coalescing",
00:06:30.193 "vhost_get_controllers",
00:06:30.193 "vhost_delete_controller",
00:06:30.193 "vhost_create_blk_controller",
00:06:30.193 "vhost_scsi_controller_remove_target",
00:06:30.193 "vhost_scsi_controller_add_target",
00:06:30.193 "vhost_start_scsi_controller",
00:06:30.193 "vhost_create_scsi_controller",
00:06:30.193 "thread_set_cpumask",
00:06:30.193 "framework_get_governor",
00:06:30.193 "framework_get_scheduler",
00:06:30.193 "framework_set_scheduler",
00:06:30.193 "framework_get_reactors",
00:06:30.193 "thread_get_io_channels",
00:06:30.193 "thread_get_pollers",
00:06:30.193 "thread_get_stats",
00:06:30.193 "framework_monitor_context_switch",
00:06:30.193 "spdk_kill_instance",
00:06:30.193 "log_enable_timestamps",
00:06:30.193 "log_get_flags",
00:06:30.193 "log_clear_flag",
00:06:30.193 "log_set_flag",
00:06:30.193 "log_get_level",
00:06:30.193 "log_set_level",
00:06:30.193 "log_get_print_level",
00:06:30.193 "log_set_print_level",
00:06:30.193 "framework_enable_cpumask_locks",
00:06:30.193 "framework_disable_cpumask_locks",
00:06:30.193 "framework_wait_init",
00:06:30.193 "framework_start_init",
00:06:30.193 "scsi_get_devices",
00:06:30.193 "bdev_get_histogram",
00:06:30.193 "bdev_enable_histogram",
00:06:30.193 "bdev_set_qos_limit",
00:06:30.193 "bdev_set_qd_sampling_period",
00:06:30.193 "bdev_get_bdevs",
00:06:30.193 "bdev_reset_iostat",
00:06:30.193 "bdev_get_iostat",
00:06:30.193 "bdev_examine",
00:06:30.193 "bdev_wait_for_examine",
00:06:30.193 "bdev_set_options",
00:06:30.193 "notify_get_notifications",
00:06:30.193 "notify_get_types",
00:06:30.193 "accel_get_stats",
00:06:30.193 "accel_set_options",
00:06:30.193 "accel_set_driver",
00:06:30.193 "accel_crypto_key_destroy",
00:06:30.193 "accel_crypto_keys_get",
00:06:30.193 "accel_crypto_key_create",
00:06:30.193 "accel_assign_opc",
00:06:30.193 "accel_get_module_info",
00:06:30.193 "accel_get_opc_assignments",
00:06:30.193 "vmd_rescan",
00:06:30.193 "vmd_remove_device",
00:06:30.193 "vmd_enable",
00:06:30.193 "sock_get_default_impl",
00:06:30.193 "sock_set_default_impl",
00:06:30.193 "sock_impl_set_options",
00:06:30.193 "sock_impl_get_options",
00:06:30.193 "iobuf_get_stats",
00:06:30.193 "iobuf_set_options",
00:06:30.193 "framework_get_pci_devices",
00:06:30.193 "framework_get_config",
00:06:30.194 "framework_get_subsystems",
00:06:30.194 "trace_get_info",
00:06:30.194 "trace_get_tpoint_group_mask",
00:06:30.194 "trace_disable_tpoint_group",
00:06:30.194 "trace_enable_tpoint_group",
00:06:30.194 "trace_clear_tpoint_mask",
00:06:30.194 "trace_set_tpoint_mask",
00:06:30.194 "keyring_get_keys",
00:06:30.194 "spdk_get_version",
00:06:30.194 "rpc_get_methods"
00:06:30.194 ]
00:06:30.194 18:09:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:30.194 18:09:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:30.194 18:09:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2112299
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2112299 ']'
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2112299
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2112299
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2112299'
killing process with pid 2112299
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2112299
00:06:30.194 18:09:38 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2112299
00:06:30.452
00:06:30.452 real 0m1.560s
00:06:30.452 user 0m2.767s
00:06:30.452 sys 0m0.538s
00:06:30.453 18:09:39 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:30.453 18:09:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:30.453 ************************************
00:06:30.453 END TEST spdkcli_tcp
00:06:30.453 ************************************
00:06:30.711 18:09:39 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:30.711 18:09:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:30.711 18:09:39 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:30.711 18:09:39 -- common/autotest_common.sh@10 -- # set +x
00:06:30.711 ************************************
00:06:30.711 START TEST dpdk_mem_utility
00:06:30.711 ************************************
00:06:30.712 18:09:39 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:30.712 * Looking for test storage...
00:06:30.712 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility
00:06:30.712 18:09:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:30.712 18:09:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2112678
00:06:30.712 18:09:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2112678
00:06:30.712 18:09:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:06:30.712 18:09:39 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2112678 ']'
00:06:30.712 18:09:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:30.712 18:09:39 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:30.712 18:09:39 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:30.712 18:09:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:30.712 18:09:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:30.712 [2024-07-24 18:09:39.281763] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:06:30.712 [2024-07-24 18:09:39.281818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112678 ]
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:01.0 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:01.1 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:01.2 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:01.3 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:01.4 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:01.5 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:01.6 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:01.7 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:02.0 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:02.1 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:02.2 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:02.3 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:02.4 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:02.5 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:02.6 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b3:02.7 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:01.0 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:01.1 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:01.2 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:01.3 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:01.4 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:01.5 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:01.6 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:01.7 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:02.0 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:02.1 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:02.2 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:02.3 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:02.4 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:02.5 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:02.6 cannot be used
00:06:30.972 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:30.972 EAL: Requested device 0000:b5:02.7 cannot be used
00:06:30.972 [2024-07-24 18:09:39.375834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:30.972 [2024-07-24 18:09:39.448308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.540 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:31.540 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:06:31.540 18:09:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:31.540 18:09:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:31.540 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:31.540 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:31.540 {
00:06:31.540 "filename": "/tmp/spdk_mem_dump.txt"
00:06:31.540 }
00:06:31.540 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:31.540 18:09:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:31.804 DPDK memory size 816.000000 MiB in 2 heap(s)
00:06:31.804 2 heaps totaling size 816.000000 MiB
00:06:31.804 size: 814.000000 MiB heap id: 0
00:06:31.804 size: 2.000000 MiB heap id: 1
00:06:31.804 end heaps----------
00:06:31.804 8 mempools totaling size 598.116089 MiB
00:06:31.804 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:31.804 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:31.804 size: 84.521057 MiB name: bdev_io_2112678
00:06:31.804 size: 51.011292 MiB name: evtpool_2112678
00:06:31.804 size: 50.003479 MiB name: msgpool_2112678
00:06:31.804 size: 21.763794 MiB name: PDU_Pool
00:06:31.804 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:31.804 size: 0.026123 MiB name: Session_Pool
00:06:31.804 end mempools-------
00:06:31.804 201 memzones totaling size 4.176453 MiB
00:06:31.804 size: 1.000366 MiB name: RG_ring_0_2112678
00:06:31.804 size: 1.000366 MiB name: RG_ring_1_2112678
00:06:31.804 size: 1.000366 MiB name: RG_ring_4_2112678
00:06:31.804 size: 1.000366 MiB name: RG_ring_5_2112678
00:06:31.804 size: 0.125366 MiB name: RG_ring_2_2112678
00:06:31.804 size: 0.015991 MiB name: RG_ring_3_2112678
00:06:31.804 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1
00:06:31.804 size: 0.000305 MiB name: 0000:3d:01.0_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:01.1_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:01.2_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:01.3_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:01.4_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:01.5_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:01.6_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:01.7_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:02.0_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:02.1_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:02.2_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:02.3_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:02.4_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:02.5_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:02.6_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3d:02.7_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:01.0_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:01.1_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:01.2_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:01.3_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:01.4_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:01.5_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:01.6_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:01.7_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:02.0_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:02.1_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:02.2_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:02.3_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:02.4_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:02.5_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:02.6_qat
00:06:31.804 size: 0.000305 MiB name: 0000:3f:02.7_qat
00:06:31.804 size: 0.000305 MiB name: 0000:b1:01.0_qat
00:06:31.804 size: 0.000305 MiB name: 0000:b1:01.1_qat
00:06:31.804 size: 0.000305 MiB name: 0000:b1:01.2_qat
00:06:31.804 size: 0.000305 MiB name: 0000:b1:01.3_qat
00:06:31.804 size: 0.000305 MiB name: 0000:b1:01.4_qat
00:06:31.804 size: 0.000305 MiB name: 0000:b1:01.5_qat
00:06:31.804 size: 0.000305 MiB name: 0000:b1:01.6_qat
00:06:31.804 size: 0.000305 MiB name: 0000:b1:01.7_qat
00:06:31.805 size: 0.000305 MiB name: 0000:b1:02.0_qat
00:06:31.805 size: 0.000305 MiB name: 0000:b1:02.1_qat
00:06:31.805 size: 0.000305 MiB name: 0000:b1:02.2_qat
00:06:31.805 size: 0.000305 MiB name: 0000:b1:02.3_qat
00:06:31.805 size: 0.000305 MiB name: 0000:b1:02.4_qat
00:06:31.805 size: 0.000305 MiB name: 0000:b1:02.5_qat
00:06:31.805 size: 0.000305 MiB name: 0000:b1:02.6_qat
00:06:31.805 size: 0.000305 MiB name: 0000:b1:02.7_qat
00:06:31.805 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_0
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_1
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_0
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_2
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_3
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_1
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_4
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_5
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_2
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_6
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_7
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_3
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_8
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_9
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_4
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_10
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_11
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_5
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_12
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_13
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_6
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_14
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_15
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_7
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_16
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_17
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_8
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_18
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_19
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_9
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_20
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_21
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_10
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_22
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_23
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_11
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_24
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_25
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_12
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_26
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_27
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_13
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_28
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_29
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_14
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_30
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_31
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_15
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_32
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_33
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_16
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_34
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_35
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_17
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_36
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_37
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_18
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_38
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_39
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_19
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_40
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_41
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_20
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_42
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_43
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_21
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_44
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_45
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_22
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_46
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_47
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_23
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_48
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_49
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_24
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_50
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_51
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_25
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_52
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_53
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_26
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_54
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_55
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_27
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_56
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_57
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_28
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_58
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_59
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_29
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_60
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_61
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_30
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_62
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_63
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_31
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_64
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_65
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_32
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_66
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_67
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_33
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_68
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_69
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_34
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_70
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_71
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_35
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_72
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_73
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_36
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_74
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_75
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_37
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_76
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_77
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_38
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_78
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_79
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_39
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_80
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_81
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_40
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_82
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_83
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_41
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_84
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_85
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_42
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_86
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_87
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_43
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_88
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_89
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_44
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_90
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_91
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_45
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_92
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_93
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_46
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_94
00:06:31.805 size: 0.000122 MiB name: rte_cryptodev_data_95
00:06:31.805 size: 0.000122 MiB name: rte_compressdev_data_47
00:06:31.805 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:06:31.805 end memzones-------
00:06:31.805 18:09:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:06:31.805 heap id: 0 total size: 814.000000 MiB number of busy elements: 591 number of free elements: 14
00:06:31.805 list of free elements. size: 11.801636 MiB
00:06:31.805 element at address: 0x200000400000 with size: 1.999512 MiB
00:06:31.806 element at address: 0x200018e00000 with size: 0.999878 MiB
00:06:31.806 element at address: 0x200019000000 with size: 0.999878 MiB
00:06:31.806 element at address: 0x200003e00000 with size: 0.996460 MiB
00:06:31.806 element at address: 0x200031c00000 with size: 0.994446 MiB
00:06:31.806 element at address: 0x200013800000 with size: 0.978882 MiB
00:06:31.806 element at address: 0x200007000000 with size: 0.959839 MiB
00:06:31.806 element at address: 0x200019200000 with size: 0.937256 MiB
00:06:31.806 element at address: 0x20001aa00000 with size: 0.577393 MiB
00:06:31.806 element at address: 0x200003a00000 with size: 0.498535 MiB
00:06:31.806 element at address: 0x20000b200000 with size: 0.491272 MiB
00:06:31.806 element at address: 0x200000800000 with size: 0.486694 MiB
00:06:31.806 element at address: 0x200019400000 with size: 0.485840 MiB
00:06:31.806 element at address: 0x200027e00000 with size: 0.395752 MiB
00:06:31.806 list of standard malloc elements.
size: 199.890076 MiB 00:06:31.806 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:31.806 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:31.806 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:31.806 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:31.806 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:31.806 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:31.806 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:31.806 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:31.806 element at address: 0x200000330b40 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003340c0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000337640 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000033abc0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000033e140 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003416c0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000344c40 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003481c0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000034b740 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000034ecc0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000352240 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003557c0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000358d40 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000035c2c0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000035f840 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000362dc0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000366880 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000036a340 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000036de00 with size: 0.004395 MiB 00:06:31.806 element at 
address: 0x2000003718c0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000375380 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000378e40 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000037c900 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003803c0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000383e80 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000387940 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000038b400 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000038eec0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000392980 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000396440 with size: 0.004395 MiB 00:06:31.806 element at address: 0x200000399f00 with size: 0.004395 MiB 00:06:31.806 element at address: 0x20000039d9c0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003a1480 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003a4f40 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003a8a00 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003ac4c0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003aff80 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003b3a40 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003b7500 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003bafc0 with size: 0.004395 MiB 00:06:31.806 element at address: 0x2000003bea80 with size: 0.004395 MiB 00:06:31.843 element at address: 0x2000003c2540 with size: 0.004395 MiB 00:06:31.843 element at address: 0x2000003c6000 with size: 0.004395 MiB 00:06:31.843 element at address: 0x2000003c9ac0 with size: 0.004395 MiB 00:06:31.843 element at address: 0x2000003cd580 with size: 0.004395 MiB 00:06:31.843 element at address: 0x2000003d1040 with size: 0.004395 MiB 00:06:31.843 element at address: 0x2000003d4b00 with size: 0.004395 MiB 
00:06:31.843 element at address: 0x2000003d8d00 with size: 0.004395 MiB 00:06:31.843 element at address: 0x20000032ea40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000032fac0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000331fc0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000333040 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000335540 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003365c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000338ac0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000339b40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000033c040 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000033d0c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000033f5c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000340640 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000342b40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000343bc0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003460c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000347140 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000349640 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000034a6c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000034cbc0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000034dc40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000350140 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003511c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003536c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000354740 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000356c40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000357cc0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000035a1c0 with 
size: 0.004028 MiB 00:06:31.844 element at address: 0x20000035b240 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000035d740 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000035e7c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000360cc0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000361d40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000364780 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000365800 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000368240 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003692c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000036bd00 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000036cd80 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000036f7c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000370840 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000373280 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000374300 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000376d40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000377dc0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000037a800 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000037b880 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000037e2c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000037f340 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000381d80 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000382e00 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000385840 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003868c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000389300 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000038a380 with size: 0.004028 MiB 00:06:31.844 element at address: 
0x20000038cdc0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000038de40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000390880 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000391900 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000394340 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003953c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000397e00 with size: 0.004028 MiB 00:06:31.844 element at address: 0x200000398e80 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000039b8c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000039c940 with size: 0.004028 MiB 00:06:31.844 element at address: 0x20000039f380 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003a0400 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003a2e40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003a3ec0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003a6900 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003a7980 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003aa3c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003ab440 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003ade80 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003aef00 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003b1940 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003b29c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003b5400 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003b6480 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003b8ec0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003b9f40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003bc980 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003bda00 with size: 0.004028 MiB 00:06:31.844 
element at address: 0x2000003c0440 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003c14c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003c3f00 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003c4f80 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003c79c0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003c8a40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003cb480 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003cc500 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003cef40 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003cffc0 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003d2a00 with size: 0.004028 MiB 00:06:31.844 element at address: 0x2000003d3a80 with size: 0.004028 MiB 00:06:31.845 element at address: 0x2000003d6c00 with size: 0.004028 MiB 00:06:31.845 element at address: 0x2000003d7c80 with size: 0.004028 MiB 00:06:31.845 element at address: 0x200000204200 with size: 0.000305 MiB 00:06:31.845 element at address: 0x200000200000 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002000c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200180 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200240 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200300 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002003c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200480 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200540 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200600 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002006c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200780 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200840 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200900 with size: 0.000183 
MiB 00:06:31.845 element at address: 0x2000002009c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200a80 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200b40 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200c00 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200cc0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200d80 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200e40 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200f00 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000200fc0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201080 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201140 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201200 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002012c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201380 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201440 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201500 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002015c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201680 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201740 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201800 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002018c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201980 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201a40 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201b00 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201bc0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201c80 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201d40 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201e00 
with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201ec0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000201f80 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202040 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202100 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002021c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202280 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202340 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202400 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002024c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202580 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202640 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202700 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002027c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202880 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202940 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202a00 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202ac0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202b80 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202c40 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202d00 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202dc0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202e80 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000202f40 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203000 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002030c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203180 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203240 with size: 0.000183 MiB 00:06:31.845 element at 
address: 0x200000203300 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002033c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203480 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203540 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203600 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002036c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203780 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203840 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203900 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002039c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203a80 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203b40 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203c00 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203cc0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203d80 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203e40 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203f00 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000203fc0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000204080 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000204140 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000204340 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000204400 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002044c0 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000204580 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000204640 with size: 0.000183 MiB 00:06:31.845 element at address: 0x200000204700 with size: 0.000183 MiB 00:06:31.845 element at address: 0x2000002047c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000204880 with size: 0.000183 MiB 
00:06:31.846 element at address: 0x200000204940 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000204a00 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000204ac0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000204b80 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000204c40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000204d00 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000204dc0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000204e80 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000204f40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205000 with size: 0.000183 MiB 00:06:31.846 element at address: 0x2000002050c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205180 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205240 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205300 with size: 0.000183 MiB 00:06:31.846 element at address: 0x2000002053c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205480 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205540 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205600 with size: 0.000183 MiB 00:06:31.846 element at address: 0x2000002056c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205780 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205840 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205900 with size: 0.000183 MiB 00:06:31.846 element at address: 0x2000002059c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205a80 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205b40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205c00 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205cc0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205d80 with 
size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205e40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205f00 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000205fc0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000206080 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000206140 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000206200 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000206400 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000020a6c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022a980 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022aa40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022ab00 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022abc0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022ac80 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022ad40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022ae00 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022aec0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022af80 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b040 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b100 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b1c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b280 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b340 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b400 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b4c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b580 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b640 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b700 with size: 0.000183 MiB 00:06:31.846 element at address: 
0x20000022b900 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022b9c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022ba80 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022bb40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022bc00 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022bcc0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022bd80 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022be40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022bf00 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022bfc0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022c080 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022c140 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022c200 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022c2c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022c380 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022c440 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000022c500 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000032e700 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000032e7c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000331d40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x2000003352c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000338840 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000033bdc0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000033f340 with size: 0.000183 MiB 00:06:31.846 element at address: 0x2000003428c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000345e40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x2000003493c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000034c940 with size: 0.000183 MiB 00:06:31.846 
element at address: 0x20000034fec0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000353440 with size: 0.000183 MiB 00:06:31.846 element at address: 0x2000003569c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000359f40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000035d4c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000360a40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000363fc0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000364180 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000364240 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000364400 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000367a80 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000367c40 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000367d00 with size: 0.000183 MiB 00:06:31.846 element at address: 0x200000367ec0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000036b540 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000036b700 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000036b7c0 with size: 0.000183 MiB 00:06:31.846 element at address: 0x20000036b980 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000036f000 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000036f1c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000036f280 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000036f440 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000372ac0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000372c80 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000372d40 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000372f00 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000376580 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000376740 with size: 0.000183 
MiB 00:06:31.847 element at address: 0x200000376800 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003769c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000037a040 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000037a200 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000037a2c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000037a480 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000037db00 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000037dcc0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000037dd80 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000037df40 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003815c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000381780 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000381840 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000381a00 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000385080 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000385240 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000385300 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003854c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000388b40 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000388d00 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000388dc0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000388f80 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000038c600 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000038c7c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000038c880 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000038ca40 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003900c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000390280 
with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000390340 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000390500 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000393b80 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000393d40 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000393e00 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000393fc0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000397640 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000397800 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003978c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x200000397a80 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000039b100 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000039b2c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000039b380 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000039b540 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000039ebc0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000039ed80 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000039ee40 with size: 0.000183 MiB 00:06:31.847 element at address: 0x20000039f000 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003a2680 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003a2840 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003a2900 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003a2ac0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003a6140 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003a6300 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003a63c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003a6580 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003a9c00 with size: 0.000183 MiB 00:06:31.847 element at 
address: 0x2000003a9dc0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003a9e80 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003aa040 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003ad6c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003ad880 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003ad940 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003adb00 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b1180 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b1340 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b1400 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b15c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b4c40 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b4e00 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b4ec0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b5080 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b8700 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b88c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b8980 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003b8b40 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003bc1c0 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003bc380 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003bc440 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003bc600 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003bfc80 with size: 0.000183 MiB 00:06:31.847 element at address: 0x2000003bfe40 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003bff00 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003c00c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003c3740 with size: 0.000183 MiB 
00:06:31.848 element at address: 0x2000003c3900 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003c39c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003c3b80 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003c7200 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003c73c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003c7480 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003c7640 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003cacc0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003cae80 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003caf40 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003cb100 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003ce780 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003ce940 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003cea00 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003cebc0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003d2240 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003d2400 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003d24c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003d2680 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003d5dc0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003d64c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003d6580 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000003d6880 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20000087cc80 with 
size: 0.000183 MiB 00:06:31.848 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:31.848 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:31.848 element at address: 
0x20001aa94d80 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:31.848 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:31.848 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:31.849 
element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e340 with size: 0.000183 
MiB 00:06:31.849 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f840 
with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:31.849 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:31.849 list of memzone associated elements. size: 602.308289 MiB 00:06:31.849 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:31.849 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:31.849 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:31.849 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:31.849 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:31.849 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2112678_0 00:06:31.849 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:31.849 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2112678_0 00:06:31.849 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:31.849 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2112678_0 00:06:31.849 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:31.849 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:31.849 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:31.849 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:31.849 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:31.849 associated memzone info: size: 2.000366 MiB name: 
RG_MP_evtpool_2112678 00:06:31.849 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:31.849 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2112678 00:06:31.849 element at address: 0x20000022c5c0 with size: 1.008118 MiB 00:06:31.849 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2112678 00:06:31.849 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:31.849 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:31.849 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:31.849 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:31.849 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:31.849 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:31.849 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:31.850 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:31.850 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:31.850 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2112678 00:06:31.850 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:31.850 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2112678 00:06:31.850 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:31.850 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2112678 00:06:31.850 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:31.850 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2112678 00:06:31.850 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:06:31.850 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2112678 00:06:31.850 element at address: 0x20000b27dc40 with size: 0.500488 MiB 00:06:31.850 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:31.850 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:31.850 associated memzone info: size: 0.500366 MiB 
name: RG_MP_SCSI_TASK_Pool 00:06:31.850 element at address: 0x20001947c600 with size: 0.250488 MiB 00:06:31.850 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:31.850 element at address: 0x20000020a780 with size: 0.125488 MiB 00:06:31.850 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2112678 00:06:31.850 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:31.850 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:31.850 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:31.850 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:31.850 element at address: 0x2000002064c0 with size: 0.016113 MiB 00:06:31.850 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2112678 00:06:31.850 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:31.850 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:31.850 element at address: 0x2000003d5f80 with size: 0.001282 MiB 00:06:31.850 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:31.850 element at address: 0x2000003d6a40 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.0_qat 00:06:31.850 element at address: 0x2000003d2840 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.1_qat 00:06:31.850 element at address: 0x2000003ced80 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.2_qat 00:06:31.850 element at address: 0x2000003cb2c0 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.3_qat 00:06:31.850 element at address: 0x2000003c7800 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.4_qat 00:06:31.850 element at address: 0x2000003c3d40 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 
0.000305 MiB name: 0000:3d:01.5_qat 00:06:31.850 element at address: 0x2000003c0280 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.6_qat 00:06:31.850 element at address: 0x2000003bc7c0 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:01.7_qat 00:06:31.850 element at address: 0x2000003b8d00 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.0_qat 00:06:31.850 element at address: 0x2000003b5240 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.1_qat 00:06:31.850 element at address: 0x2000003b1780 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.2_qat 00:06:31.850 element at address: 0x2000003adcc0 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.3_qat 00:06:31.850 element at address: 0x2000003aa200 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.4_qat 00:06:31.850 element at address: 0x2000003a6740 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.5_qat 00:06:31.850 element at address: 0x2000003a2c80 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.6_qat 00:06:31.850 element at address: 0x20000039f1c0 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3d:02.7_qat 00:06:31.850 element at address: 0x20000039b700 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.0_qat 00:06:31.850 element at address: 0x200000397c40 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.1_qat 00:06:31.850 element at address: 0x200000394180 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 
0000:3f:01.2_qat 00:06:31.850 element at address: 0x2000003906c0 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.3_qat 00:06:31.850 element at address: 0x20000038cc00 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.4_qat 00:06:31.850 element at address: 0x200000389140 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.5_qat 00:06:31.850 element at address: 0x200000385680 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.6_qat 00:06:31.850 element at address: 0x200000381bc0 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:01.7_qat 00:06:31.850 element at address: 0x20000037e100 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.0_qat 00:06:31.850 element at address: 0x20000037a640 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.1_qat 00:06:31.850 element at address: 0x200000376b80 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.2_qat 00:06:31.850 element at address: 0x2000003730c0 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.3_qat 00:06:31.850 element at address: 0x20000036f600 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.4_qat 00:06:31.850 element at address: 0x20000036bb40 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.5_qat 00:06:31.850 element at address: 0x200000368080 with size: 0.000427 MiB 00:06:31.850 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.6_qat 00:06:31.851 element at address: 0x2000003645c0 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:3f:02.7_qat 
00:06:31.851 element at address: 0x200000360b00 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:01.0_qat 00:06:31.851 element at address: 0x20000035d580 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:01.1_qat 00:06:31.851 element at address: 0x20000035a000 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:01.2_qat 00:06:31.851 element at address: 0x200000356a80 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:01.3_qat 00:06:31.851 element at address: 0x200000353500 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:01.4_qat 00:06:31.851 element at address: 0x20000034ff80 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:01.5_qat 00:06:31.851 element at address: 0x20000034ca00 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:01.6_qat 00:06:31.851 element at address: 0x200000349480 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:01.7_qat 00:06:31.851 element at address: 0x200000345f00 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:02.0_qat 00:06:31.851 element at address: 0x200000342980 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:02.1_qat 00:06:31.851 element at address: 0x20000033f400 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:02.2_qat 00:06:31.851 element at address: 0x20000033be80 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:02.3_qat 00:06:31.851 element at address: 0x200000338900 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:02.4_qat 00:06:31.851 element at 
address: 0x200000335380 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:02.5_qat 00:06:31.851 element at address: 0x200000331e00 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:02.6_qat 00:06:31.851 element at address: 0x20000032e880 with size: 0.000427 MiB 00:06:31.851 associated memzone info: size: 0.000305 MiB name: 0000:b1:02.7_qat 00:06:31.851 element at address: 0x2000003d6740 with size: 0.000305 MiB 00:06:31.851 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:31.851 element at address: 0x20000022b7c0 with size: 0.000305 MiB 00:06:31.851 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2112678 00:06:31.851 element at address: 0x2000002062c0 with size: 0.000305 MiB 00:06:31.851 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2112678 00:06:31.851 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:31.851 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:31.851 element at address: 0x2000003d6940 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:31.851 element at address: 0x2000003d6640 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:31.851 element at address: 0x2000003d5e80 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:06:31.851 element at address: 0x2000003d2740 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:31.851 element at address: 0x2000003d2580 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:31.851 element at address: 0x2000003d2300 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:06:31.851 
element at address: 0x2000003cec80 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:31.851 element at address: 0x2000003ceac0 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:31.851 element at address: 0x2000003ce840 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:06:31.851 element at address: 0x2000003cb1c0 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:31.851 element at address: 0x2000003cb000 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:31.851 element at address: 0x2000003cad80 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:06:31.851 element at address: 0x2000003c7700 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:31.851 element at address: 0x2000003c7540 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:06:31.851 element at address: 0x2000003c72c0 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:06:31.851 element at address: 0x2000003c3c40 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:31.851 element at address: 0x2000003c3a80 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:31.851 element at address: 0x2000003c3800 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:06:31.851 element at address: 0x2000003c0180 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB 
name: rte_cryptodev_data_12 00:06:31.851 element at address: 0x2000003bffc0 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:31.851 element at address: 0x2000003bfd40 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:06:31.851 element at address: 0x2000003bc6c0 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:31.851 element at address: 0x2000003bc500 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:31.851 element at address: 0x2000003bc280 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:06:31.851 element at address: 0x2000003b8c00 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:31.851 element at address: 0x2000003b8a40 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:31.851 element at address: 0x2000003b87c0 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:06:31.851 element at address: 0x2000003b5140 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:31.851 element at address: 0x2000003b4f80 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:31.851 element at address: 0x2000003b4d00 with size: 0.000244 MiB 00:06:31.851 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:06:31.851 element at address: 0x2000003b1680 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:31.852 element at address: 0x2000003b14c0 with size: 0.000244 MiB 
00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:31.852 element at address: 0x2000003b1240 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:06:31.852 element at address: 0x2000003adbc0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:31.852 element at address: 0x2000003ada00 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23 00:06:31.852 element at address: 0x2000003ad780 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11 00:06:31.852 element at address: 0x2000003aa100 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:31.852 element at address: 0x2000003a9f40 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:31.852 element at address: 0x2000003a9cc0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:06:31.852 element at address: 0x2000003a6640 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:31.852 element at address: 0x2000003a6480 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:31.852 element at address: 0x2000003a6200 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:06:31.852 element at address: 0x2000003a2b80 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:31.852 element at address: 0x2000003a29c0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:31.852 element 
at address: 0x2000003a2740 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:06:31.852 element at address: 0x20000039f0c0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:31.852 element at address: 0x20000039ef00 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:31.852 element at address: 0x20000039ec80 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:06:31.852 element at address: 0x20000039b600 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_32 00:06:31.852 element at address: 0x20000039b440 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:31.852 element at address: 0x20000039b1c0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:06:31.852 element at address: 0x200000397b40 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:31.852 element at address: 0x200000397980 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:31.852 element at address: 0x200000397700 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:06:31.852 element at address: 0x200000394080 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:31.852 element at address: 0x200000393ec0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:31.852 element at address: 0x200000393c40 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB 
name: rte_compressdev_data_18 00:06:31.852 element at address: 0x2000003905c0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:31.852 element at address: 0x200000390400 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:31.852 element at address: 0x200000390180 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:06:31.852 element at address: 0x20000038cb00 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:31.852 element at address: 0x20000038c940 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:31.852 element at address: 0x20000038c6c0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:06:31.852 element at address: 0x200000389040 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:31.852 element at address: 0x200000388e80 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:31.852 element at address: 0x200000388c00 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:06:31.852 element at address: 0x200000385580 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:31.852 element at address: 0x2000003853c0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:31.852 element at address: 0x200000385140 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:06:31.852 element at address: 0x200000381ac0 with size: 0.000244 MiB 
00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:31.852 element at address: 0x200000381900 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:31.852 element at address: 0x200000381680 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:06:31.852 element at address: 0x20000037e000 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:06:31.852 element at address: 0x20000037de40 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 00:06:31.852 element at address: 0x20000037dbc0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:06:31.852 element at address: 0x20000037a540 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:31.852 element at address: 0x20000037a380 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:31.852 element at address: 0x20000037a100 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:06:31.852 element at address: 0x200000376a80 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:31.852 element at address: 0x2000003768c0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:06:31.852 element at address: 0x200000376640 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:06:31.852 element at address: 0x200000372fc0 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:31.852 element 
at address: 0x200000372e00 with size: 0.000244 MiB 00:06:31.852 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:31.853 element at address: 0x200000372b80 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:06:31.853 element at address: 0x20000036f500 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:31.853 element at address: 0x20000036f340 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:31.853 element at address: 0x20000036f0c0 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_28 00:06:31.853 element at address: 0x20000036ba40 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:31.853 element at address: 0x20000036b880 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:31.853 element at address: 0x20000036b600 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:06:31.853 element at address: 0x200000367f80 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:31.853 element at address: 0x200000367dc0 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:31.853 element at address: 0x200000367b40 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:06:31.853 element at address: 0x2000003644c0 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:31.853 element at address: 0x200000364300 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB 
name: rte_cryptodev_data_63 00:06:31.853 element at address: 0x200000364080 with size: 0.000244 MiB 00:06:31.853 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:06:31.853 element at address: 0x2000003d5d00 with size: 0.000183 MiB 00:06:31.853 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:31.853 18:09:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:31.853 18:09:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2112678 00:06:31.853 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2112678 ']' 00:06:31.853 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2112678 00:06:31.853 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:31.853 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.853 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2112678 00:06:31.853 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.853 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.853 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2112678' 00:06:31.853 killing process with pid 2112678 00:06:31.853 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2112678 00:06:31.853 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2112678 00:06:32.113 00:06:32.113 real 0m1.498s 00:06:32.113 user 0m1.541s 00:06:32.113 sys 0m0.507s 00:06:32.113 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.113 18:09:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:32.113 ************************************ 00:06:32.113 END TEST dpdk_mem_utility 00:06:32.113 
************************************ 00:06:32.113 18:09:40 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:32.113 18:09:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.113 18:09:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.113 18:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:32.113 ************************************ 00:06:32.113 START TEST event 00:06:32.113 ************************************ 00:06:32.113 18:09:40 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:32.373 * Looking for test storage... 00:06:32.373 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:06:32.373 18:09:40 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:32.373 18:09:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:32.373 18:09:40 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:32.373 18:09:40 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:32.373 18:09:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.373 18:09:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.373 ************************************ 00:06:32.373 START TEST event_perf 00:06:32.373 ************************************ 00:06:32.373 18:09:40 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:32.373 Running I/O for 1 seconds...[2024-07-24 18:09:40.864765] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:06:32.373 [2024-07-24 18:09:40.864816] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112997 ] 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:01.0 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:01.1 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:01.2 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:01.3 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:01.4 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:01.5 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:01.6 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:01.7 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:02.0 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:02.1 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.373 EAL: Requested device 0000:b3:02.2 cannot be used 00:06:32.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b3:02.3 cannot be used 
00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b3:02.4 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b3:02.5 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b3:02.6 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b3:02.7 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:01.0 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:01.1 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:01.2 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:01.3 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:01.4 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:01.5 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:01.6 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:01.7 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:02.0 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:02.1 cannot be used 00:06:32.374 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:02.2 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:02.3 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:02.4 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:02.5 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:02.6 cannot be used 00:06:32.374 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:32.374 EAL: Requested device 0000:b5:02.7 cannot be used 00:06:32.374 [2024-07-24 18:09:40.958572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.633 [2024-07-24 18:09:41.032938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.633 [2024-07-24 18:09:41.033035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.633 [2024-07-24 18:09:41.033133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.633 [2024-07-24 18:09:41.033135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.634 Running I/O for 1 seconds... 00:06:33.634 lcore 0: 220044 00:06:33.634 lcore 1: 220043 00:06:33.634 lcore 2: 220043 00:06:33.634 lcore 3: 220042 00:06:33.634 done. 
00:06:33.634 00:06:33.634 real 0m1.254s 00:06:33.634 user 0m4.147s 00:06:33.634 sys 0m0.105s 00:06:33.634 18:09:42 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.634 18:09:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.634 ************************************ 00:06:33.634 END TEST event_perf 00:06:33.634 ************************************ 00:06:33.634 18:09:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:33.634 18:09:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:33.634 18:09:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.634 18:09:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.634 ************************************ 00:06:33.634 START TEST event_reactor 00:06:33.634 ************************************ 00:06:33.634 18:09:42 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:33.634 [2024-07-24 18:09:42.214459] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:06:33.634 [2024-07-24 18:09:42.214520] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113233 ] 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:01.0 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:01.1 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:01.2 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:01.3 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:01.4 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:01.5 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:01.6 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:01.7 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:02.0 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:02.1 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:02.2 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:02.3 cannot be used 
00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:02.4 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:02.5 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:02.6 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.893 EAL: Requested device 0000:b3:02.7 cannot be used 00:06:33.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:01.0 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:01.1 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:01.2 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:01.3 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:01.4 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:01.5 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:01.6 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:01.7 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:02.0 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:02.1 cannot be used 00:06:33.894 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:02.2 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:02.3 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:02.4 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:02.5 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:02.6 cannot be used 00:06:33.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:33.894 EAL: Requested device 0000:b5:02.7 cannot be used 00:06:33.894 [2024-07-24 18:09:42.310654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.894 [2024-07-24 18:09:42.379932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.272 test_start 00:06:35.272 oneshot 00:06:35.272 tick 100 00:06:35.272 tick 100 00:06:35.272 tick 250 00:06:35.272 tick 100 00:06:35.272 tick 100 00:06:35.272 tick 100 00:06:35.272 tick 250 00:06:35.272 tick 500 00:06:35.272 tick 100 00:06:35.272 tick 100 00:06:35.272 tick 250 00:06:35.272 tick 100 00:06:35.272 tick 100 00:06:35.272 test_end 00:06:35.272 00:06:35.272 real 0m1.256s 00:06:35.272 user 0m1.134s 00:06:35.272 sys 0m0.118s 00:06:35.272 18:09:43 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.272 18:09:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:35.272 ************************************ 00:06:35.272 END TEST event_reactor 00:06:35.272 ************************************ 00:06:35.272 18:09:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:35.272 
18:09:43 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:35.272 18:09:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.272 18:09:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.272 ************************************ 00:06:35.272 START TEST event_reactor_perf 00:06:35.272 ************************************ 00:06:35.272 18:09:43 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:35.272 [2024-07-24 18:09:43.557302] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:06:35.272 [2024-07-24 18:09:43.557361] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113425 ] 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:01.0 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:01.1 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:01.2 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:01.3 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:01.4 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:01.5 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:01.6 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:01.7 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:02.0 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:02.1 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:02.2 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:02.3 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:02.4 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:02.5 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:02.6 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b3:02.7 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:01.0 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:01.1 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:01.2 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:01.3 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:01.4 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:06:35.272 EAL: Requested device 0000:b5:01.5 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:01.6 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:01.7 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:02.0 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:02.1 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:02.2 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:02.3 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:02.4 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:02.5 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:02.6 cannot be used 00:06:35.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:35.272 EAL: Requested device 0000:b5:02.7 cannot be used 00:06:35.272 [2024-07-24 18:09:43.652621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.272 [2024-07-24 18:09:43.722228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.209 test_start 00:06:36.209 test_end 00:06:36.209 Performance: 536860 events per second 00:06:36.209 00:06:36.209 real 0m1.256s 00:06:36.209 user 0m1.140s 00:06:36.209 sys 0m0.112s 00:06:36.209 18:09:44 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.209 
18:09:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.209 ************************************ 00:06:36.209 END TEST event_reactor_perf 00:06:36.209 ************************************ 00:06:36.469 18:09:44 event -- event/event.sh@49 -- # uname -s 00:06:36.469 18:09:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:36.469 18:09:44 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:36.469 18:09:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.469 18:09:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.469 18:09:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.469 ************************************ 00:06:36.469 START TEST event_scheduler 00:06:36.469 ************************************ 00:06:36.469 18:09:44 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:36.469 * Looking for test storage... 
00:06:36.469 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:06:36.469 18:09:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:36.469 18:09:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2113703 00:06:36.469 18:09:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:36.469 18:09:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.469 18:09:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2113703 00:06:36.469 18:09:44 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2113703 ']' 00:06:36.469 18:09:44 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.469 18:09:44 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.469 18:09:44 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.469 18:09:44 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.469 18:09:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.469 [2024-07-24 18:09:45.034036] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:06:36.469 [2024-07-24 18:09:45.034084] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113703 ]
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:01.0 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:01.1 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:01.2 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:01.3 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:01.4 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:01.5 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:01.6 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:01.7 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:02.0 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:02.1 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:02.2 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:02.3 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:02.4 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:02.5 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:02.6 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b3:02.7 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:01.0 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:01.1 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:01.2 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:01.3 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:01.4 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:01.5 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:01.6 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:01.7 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:02.0 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:02.1 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:02.2 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:02.3 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:02.4 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:02.5 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:02.6 cannot be used
00:06:36.729 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:36.729 EAL: Requested device 0000:b5:02.7 cannot be used
00:06:36.729 [2024-07-24 18:09:45.129080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:36.729 [2024-07-24 18:09:45.203962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.729 [2024-07-24 18:09:45.204048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:36.729 [2024-07-24 18:09:45.204144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:36.729 [2024-07-24 18:09:45.204146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:37.298 18:09:45 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:37.298 18:09:45 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:06:37.298 18:09:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:37.298 18:09:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:37.298 18:09:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:37.298 [2024-07-24 18:09:45.842411] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:37.298 [2024-07-24 18:09:45.842432] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:37.298 [2024-07-24 18:09:45.842444] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:37.298 [2024-07-24 18:09:45.842454] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:37.298 [2024-07-24 18:09:45.842462] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:37.298 18:09:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.298 18:09:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:37.298 18:09:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.298 18:09:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.557 [2024-07-24 18:09:45.927701] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:37.557 18:09:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.557 18:09:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:37.557 18:09:45 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.557 18:09:45 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.557 18:09:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.557 ************************************ 00:06:37.557 START TEST scheduler_create_thread 00:06:37.557 ************************************ 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:37.557 18:09:45 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.557 2 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.557 3 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.557 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.557 4 00:06:37.558 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.558 18:09:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:37.558 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.558 18:09:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.558 5 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.558 6 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.558 7 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.558 8 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 
-- # xtrace_disable 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.558 9 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.558 10 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.558 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.126 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:38.126 18:09:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:38.126 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.126 18:09:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.505 18:09:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.505 18:09:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:39.505 18:09:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:39.505 18:09:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.505 18:09:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.884 18:09:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.884 00:06:40.884 real 0m3.096s 00:06:40.884 user 0m0.026s 00:06:40.884 sys 0m0.005s 00:06:40.884 18:09:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.884 18:09:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.884 ************************************ 00:06:40.884 END TEST scheduler_create_thread 00:06:40.884 ************************************ 00:06:40.884 18:09:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:40.884 18:09:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2113703 00:06:40.884 18:09:49 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2113703 ']' 00:06:40.884 18:09:49 event.event_scheduler -- 
common/autotest_common.sh@954 -- # kill -0 2113703 00:06:40.884 18:09:49 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:40.884 18:09:49 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.884 18:09:49 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2113703 00:06:40.884 18:09:49 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:40.884 18:09:49 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:40.884 18:09:49 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2113703' 00:06:40.884 killing process with pid 2113703 00:06:40.884 18:09:49 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2113703 00:06:40.884 18:09:49 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2113703 00:06:40.884 [2024-07-24 18:09:49.442886] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:41.143 00:06:41.143 real 0m4.777s 00:06:41.143 user 0m9.083s 00:06:41.143 sys 0m0.464s 00:06:41.143 18:09:49 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.143 18:09:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.143 ************************************ 00:06:41.143 END TEST event_scheduler 00:06:41.143 ************************************ 00:06:41.143 18:09:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:41.143 18:09:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:41.143 18:09:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.143 18:09:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.143 18:09:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.403 ************************************ 00:06:41.403 START TEST app_repeat 00:06:41.403 ************************************ 00:06:41.403 18:09:49 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2114616 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2114616' 00:06:41.403 Process app_repeat pid: 2114616 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:41.403 spdk_app_start Round 0 00:06:41.403 18:09:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2114616 /var/tmp/spdk-nbd.sock 00:06:41.403 18:09:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2114616 ']' 00:06:41.403 18:09:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.403 18:09:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.403 18:09:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.403 18:09:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.403 18:09:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.403 [2024-07-24 18:09:49.789898] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:06:41.403 [2024-07-24 18:09:49.789958] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114616 ] 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:01.0 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:01.1 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:01.2 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:01.3 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:01.4 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:01.5 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:01.6 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:01.7 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:02.0 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:02.1 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:02.2 cannot be used 00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:02.3 cannot be used 
00:06:41.403 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.403 EAL: Requested device 0000:b3:02.4 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b3:02.5 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b3:02.6 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b3:02.7 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:01.0 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:01.1 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:01.2 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:01.3 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:01.4 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:01.5 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:01.6 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:01.7 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:02.0 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:02.1 cannot be used 00:06:41.404 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:02.2 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:02.3 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:02.4 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:02.5 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:02.6 cannot be used 00:06:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.404 EAL: Requested device 0000:b5:02.7 cannot be used 00:06:41.404 [2024-07-24 18:09:49.886350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.404 [2024-07-24 18:09:49.960728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.404 [2024-07-24 18:09:49.960733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.342 18:09:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.342 18:09:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:42.342 18:09:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.342 Malloc0 00:06:42.342 18:09:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.342 Malloc1 00:06:42.342 18:09:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.342 18:09:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.342 
18:09:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.602 18:09:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:42.602 /dev/nbd0 00:06:42.602 18:09:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:42.602 18:09:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:42.602 
18:09:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.602 1+0 records in 00:06:42.602 1+0 records out 00:06:42.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218861 s, 18.7 MB/s 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:42.602 18:09:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:42.602 18:09:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.602 18:09:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.602 18:09:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:42.861 /dev/nbd1 00:06:42.861 18:09:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:42.861 18:09:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 
00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.861 1+0 records in 00:06:42.861 1+0 records out 00:06:42.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026798 s, 15.3 MB/s 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:42.861 18:09:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:42.861 18:09:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.861 18:09:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.861 18:09:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.861 18:09:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.861 18:09:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:43.120 {
00:06:43.120 "nbd_device": "/dev/nbd0",
00:06:43.120 "bdev_name": "Malloc0"
00:06:43.120 },
00:06:43.120 {
00:06:43.120 "nbd_device": "/dev/nbd1",
00:06:43.120 "bdev_name": "Malloc1"
00:06:43.120 }
00:06:43.120 ]'
18:09:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:43.120 {
00:06:43.120 "nbd_device": "/dev/nbd0",
00:06:43.120 "bdev_name": "Malloc0"
00:06:43.120 },
00:06:43.120 {
00:06:43.120 "nbd_device": "/dev/nbd1",
00:06:43.120 "bdev_name": "Malloc1"
00:06:43.120 }
00:06:43.120 ]'
18:09:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
18:09:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:43.120 /dev/nbd1'
18:09:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:43.120 /dev/nbd1'
18:09:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
18:09:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
18:09:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
18:09:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
18:09:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
18:09:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
18:09:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
18:09:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
18:09:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
18:09:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
18:09:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:43.120 256+0 records in 00:06:43.120 256+0 records out 00:06:43.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115214 s, 91.0 MB/s 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:43.120 256+0 records in 00:06:43.120 256+0 records out 00:06:43.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121988 s, 86.0 MB/s 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:43.120 256+0 records in 00:06:43.120 256+0 records out 00:06:43.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168357 s, 62.3 MB/s 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.120 18:09:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.379 18:09:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.379 18:09:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.379 18:09:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.379 18:09:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.379 18:09:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.379 18:09:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.379 18:09:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.379 18:09:51 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:43.379 18:09:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.379 18:09:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.637 18:09:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.896 18:09:52 
event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.896 18:09:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.896 18:09:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.896 18:09:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:44.157 [2024-07-24 18:09:52.671291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.157 [2024-07-24 18:09:52.738249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.157 [2024-07-24 18:09:52.738253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.415 [2024-07-24 18:09:52.779363] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:44.415 [2024-07-24 18:09:52.779409] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:46.945 18:09:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:46.945 18:09:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:46.945 spdk_app_start Round 1
00:06:46.945 18:09:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2114616 /var/tmp/spdk-nbd.sock
00:06:46.945 18:09:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2114616 ']'
00:06:46.945 18:09:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:46.945 18:09:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:46.945 18:09:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:46.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:46.945 18:09:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:46.945 18:09:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:47.204 18:09:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:47.204 18:09:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:47.204 18:09:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:47.463 Malloc0
00:06:47.463 18:09:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:47.463 Malloc1
00:06:47.463 18:09:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:47.463 18:09:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:47.723 /dev/nbd0
00:06:47.723 18:09:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:47.723 18:09:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:47.723 1+0 records in
00:06:47.723 1+0 records out
00:06:47.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259804 s, 15.8 MB/s
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:47.723 18:09:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:47.723 18:09:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:47.723 18:09:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:47.723 18:09:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:47.981 /dev/nbd1
00:06:47.981 18:09:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:47.981 18:09:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:47.981 1+0 records in
00:06:47.981 1+0 records out
00:06:47.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257591 s, 15.9 MB/s
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:47.981 18:09:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:47.981 18:09:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:47.981 18:09:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:47.981 18:09:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:47.981 18:09:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:47.981 18:09:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:48.240 {
00:06:48.240 "nbd_device": "/dev/nbd0",
00:06:48.240 "bdev_name": "Malloc0"
00:06:48.240 },
00:06:48.240 {
00:06:48.240 "nbd_device": "/dev/nbd1",
00:06:48.240 "bdev_name": "Malloc1"
00:06:48.240 }
00:06:48.240 ]'
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:48.240 {
00:06:48.240 "nbd_device": "/dev/nbd0",
00:06:48.240 "bdev_name": "Malloc0"
00:06:48.240 },
00:06:48.240 {
00:06:48.240 "nbd_device": "/dev/nbd1",
00:06:48.240 "bdev_name": "Malloc1"
00:06:48.240 }
00:06:48.240 ]'
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:48.240 /dev/nbd1'
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:48.240 /dev/nbd1'
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:48.240 256+0 records in
00:06:48.240 256+0 records out
00:06:48.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104287 s, 101 MB/s
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:48.240 256+0 records in
00:06:48.240 256+0 records out
00:06:48.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195695 s, 53.6 MB/s
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:48.240 256+0 records in
00:06:48.240 256+0 records out
00:06:48.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171403 s, 61.2 MB/s
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:48.240 18:09:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:48.499 18:09:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:48.499 18:09:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:48.499 18:09:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:48.499 18:09:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:48.499 18:09:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:48.499 18:09:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:48.499 18:09:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:48.499 18:09:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:48.499 18:09:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:48.499 18:09:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:48.758 18:09:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:49.017 18:09:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:49.017 18:09:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:49.017 18:09:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:49.017 18:09:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:49.017 18:09:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:49.017 18:09:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:49.017 18:09:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:49.017 18:09:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:49.017 18:09:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:49.017 18:09:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:49.017 18:09:57 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:49.276 [2024-07-24 18:09:57.776004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:49.276 [2024-07-24 18:09:57.840317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:49.276 [2024-07-24 18:09:57.840320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.536 [2024-07-24 18:09:57.882593] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:49.536 [2024-07-24 18:09:57.882635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:52.135 18:10:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:52.135 18:10:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:52.135 spdk_app_start Round 2
00:06:52.135 18:10:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2114616 /var/tmp/spdk-nbd.sock
00:06:52.135 18:10:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2114616 ']'
00:06:52.135 18:10:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:52.135 18:10:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:52.135 18:10:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:52.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:52.135 18:10:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:52.135 18:10:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:52.394 18:10:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:52.394 18:10:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:06:52.394 18:10:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:52.394 Malloc0
00:06:52.394 18:10:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:52.654 Malloc1
00:06:52.654 18:10:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:52.654 18:10:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:52.913 /dev/nbd0
00:06:52.913 18:10:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:52.913 18:10:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:52.913 1+0 records in
00:06:52.913 1+0 records out
00:06:52.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221027 s, 18.5 MB/s
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:52.913 18:10:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:52.913 18:10:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:52.913 18:10:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:52.913 18:10:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:52.913 /dev/nbd1
00:06:53.172 18:10:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:53.172 18:10:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:53.172 1+0 records in
00:06:53.172 1+0 records out
00:06:53.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252535 s, 16.2 MB/s
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:06:53.172 18:10:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:06:53.172 18:10:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:53.173 {
00:06:53.173 "nbd_device": "/dev/nbd0",
00:06:53.173 "bdev_name": "Malloc0"
00:06:53.173 },
00:06:53.173 {
00:06:53.173 "nbd_device": "/dev/nbd1",
00:06:53.173 "bdev_name": "Malloc1"
00:06:53.173 }
00:06:53.173 ]'
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:53.173 {
00:06:53.173 "nbd_device": "/dev/nbd0",
00:06:53.173 "bdev_name": "Malloc0"
00:06:53.173 },
00:06:53.173 {
00:06:53.173 "nbd_device": "/dev/nbd1",
00:06:53.173 "bdev_name": "Malloc1"
00:06:53.173 }
00:06:53.173 ]'
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:53.173 /dev/nbd1'
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:53.173 /dev/nbd1'
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:53.173 18:10:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:53.432 256+0 records in
00:06:53.432 256+0 records out
00:06:53.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011154 s, 94.0 MB/s
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:53.432 256+0 records in
00:06:53.432 256+0 records out
00:06:53.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193976 s, 54.1 MB/s
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:53.432 256+0 records in
00:06:53.432 256+0 records out
00:06:53.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208732 s, 50.2 MB/s
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@50 -- #
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.432 18:10:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.691 18:10:02 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.691 18:10:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.951 18:10:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.951 18:10:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.210 18:10:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:54.475 [2024-07-24 18:10:02.851800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.475 [2024-07-24 18:10:02.916315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.475 [2024-07-24 
18:10:02.916319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.475 [2024-07-24 18:10:02.957484] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.475 [2024-07-24 18:10:02.957526] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.764 18:10:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2114616 /var/tmp/spdk-nbd.sock 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2114616 ']' 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:57.764 18:10:05 event.app_repeat -- event/event.sh@39 -- # killprocess 2114616 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2114616 ']' 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2114616 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2114616 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2114616' 00:06:57.764 killing process with pid 2114616 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2114616 00:06:57.764 18:10:05 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2114616 00:06:57.764 spdk_app_start is called in Round 0. 00:06:57.764 Shutdown signal received, stop current app iteration 00:06:57.764 Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 reinitialization... 00:06:57.764 spdk_app_start is called in Round 1. 00:06:57.764 Shutdown signal received, stop current app iteration 00:06:57.764 Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 reinitialization... 00:06:57.764 spdk_app_start is called in Round 2. 
00:06:57.764 Shutdown signal received, stop current app iteration 00:06:57.764 Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 reinitialization... 00:06:57.764 spdk_app_start is called in Round 3. 00:06:57.764 Shutdown signal received, stop current app iteration 00:06:57.764 18:10:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:57.764 18:10:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:57.764 00:06:57.764 real 0m16.298s 00:06:57.764 user 0m34.515s 00:06:57.764 sys 0m3.085s 00:06:57.764 18:10:06 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.764 18:10:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.764 ************************************ 00:06:57.764 END TEST app_repeat 00:06:57.764 ************************************ 00:06:57.764 18:10:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:57.764 00:06:57.764 real 0m25.387s 00:06:57.764 user 0m50.216s 00:06:57.764 sys 0m4.274s 00:06:57.764 18:10:06 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.764 18:10:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.764 ************************************ 00:06:57.764 END TEST event 00:06:57.764 ************************************ 00:06:57.764 18:10:06 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:57.764 18:10:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.764 18:10:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.764 18:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:57.764 ************************************ 00:06:57.764 START TEST thread 00:06:57.764 ************************************ 00:06:57.764 18:10:06 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:57.764 * Looking for test storage... 
00:06:57.764 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:06:57.764 18:10:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.764 18:10:06 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:57.764 18:10:06 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.764 18:10:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.764 ************************************ 00:06:57.764 START TEST thread_poller_perf 00:06:57.764 ************************************ 00:06:57.764 18:10:06 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.764 [2024-07-24 18:10:06.334362] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:06:57.764 [2024-07-24 18:10:06.334415] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117664 ] 00:06:58.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.024 EAL: Requested device 0000:b3:01.0 cannot be used 00:06:58.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.024 EAL: Requested device 0000:b3:01.1 cannot be used 00:06:58.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.024 EAL: Requested device 0000:b3:01.2 cannot be used 00:06:58.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.024 EAL: Requested device 0000:b3:01.3 cannot be used 00:06:58.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.024 EAL: Requested device 0000:b3:01.4 cannot be used 00:06:58.024 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.024 EAL: Requested device 0000:b3:01.5 cannot be used 00:06:58.024 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b3:01.6 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b3:01.7 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b3:02.0 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b3:02.1 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b3:02.2 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b3:02.3 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b3:02.4 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b3:02.5 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b3:02.6 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b3:02.7 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:01.0 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:01.1 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:01.2 cannot be used 00:06:58.025 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:01.3 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:01.4 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:01.5 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:01.6 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:01.7 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:02.0 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:02.1 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:02.2 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:02.3 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:02.4 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:02.5 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:02.6 cannot be used 00:06:58.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.025 EAL: Requested device 0000:b5:02.7 cannot be used 00:06:58.025 [2024-07-24 18:10:06.426532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.025 [2024-07-24 18:10:06.497204] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:06:58.025 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:59.404 ====================================== 00:06:59.404 busy:2509466910 (cyc) 00:06:59.404 total_run_count: 432000 00:06:59.404 tsc_hz: 2500000000 (cyc) 00:06:59.404 ====================================== 00:06:59.404 poller_cost: 5808 (cyc), 2323 (nsec) 00:06:59.404 00:06:59.404 real 0m1.253s 00:06:59.404 user 0m1.144s 00:06:59.404 sys 0m0.105s 00:06:59.404 18:10:07 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.404 18:10:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.404 ************************************ 00:06:59.404 END TEST thread_poller_perf 00:06:59.404 ************************************ 00:06:59.404 18:10:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.404 18:10:07 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:59.404 18:10:07 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.404 18:10:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.404 ************************************ 00:06:59.404 START TEST thread_poller_perf 00:06:59.404 ************************************ 00:06:59.404 18:10:07 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.404 [2024-07-24 18:10:07.679338] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:06:59.404 [2024-07-24 18:10:07.679394] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117951 ] 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:01.0 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:01.1 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:01.2 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:01.3 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:01.4 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:01.5 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:01.6 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:01.7 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:02.0 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:02.1 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:02.2 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:02.3 cannot be used 
00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:02.4 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:02.5 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:02.6 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b3:02.7 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b5:01.0 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.404 EAL: Requested device 0000:b5:01.1 cannot be used 00:06:59.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:01.2 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:01.3 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:01.4 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:01.5 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:01.6 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:01.7 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:02.0 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:02.1 cannot be used 00:06:59.405 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:02.2 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:02.3 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:02.4 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:02.5 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:02.6 cannot be used 00:06:59.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.405 EAL: Requested device 0000:b5:02.7 cannot be used 00:06:59.405 [2024-07-24 18:10:07.772797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.405 [2024-07-24 18:10:07.838672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.405 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:00.342 ====================================== 00:07:00.342 busy:2501624302 (cyc) 00:07:00.342 total_run_count: 5633000 00:07:00.342 tsc_hz: 2500000000 (cyc) 00:07:00.342 ====================================== 00:07:00.342 poller_cost: 444 (cyc), 177 (nsec) 00:07:00.342 00:07:00.342 real 0m1.254s 00:07:00.342 user 0m1.148s 00:07:00.342 sys 0m0.101s 00:07:00.342 18:10:08 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.342 18:10:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.342 ************************************ 00:07:00.342 END TEST thread_poller_perf 00:07:00.342 ************************************ 00:07:00.601 18:10:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:00.601 00:07:00.601 real 0m2.782s 00:07:00.601 user 0m2.394s 00:07:00.601 sys 0m0.400s 00:07:00.601 18:10:08 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.602 18:10:08 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.602 ************************************ 00:07:00.602 END TEST thread 00:07:00.602 ************************************ 00:07:00.602 18:10:08 -- spdk/autotest.sh@184 -- # [[ 1 -eq 1 ]] 00:07:00.602 18:10:08 -- spdk/autotest.sh@185 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:07:00.602 18:10:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.602 18:10:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.602 18:10:08 -- common/autotest_common.sh@10 -- # set +x 00:07:00.602 ************************************ 00:07:00.602 START TEST accel 00:07:00.602 ************************************ 00:07:00.602 18:10:09 accel -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:07:00.602 * Looking for test storage... 
00:07:00.602 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:07:00.602 18:10:09 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:00.602 18:10:09 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:00.602 18:10:09 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:00.602 18:10:09 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2118277 00:07:00.602 18:10:09 accel -- accel/accel.sh@63 -- # waitforlisten 2118277 00:07:00.602 18:10:09 accel -- common/autotest_common.sh@831 -- # '[' -z 2118277 ']' 00:07:00.602 18:10:09 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:00.602 18:10:09 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.602 18:10:09 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:00.602 18:10:09 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.602 18:10:09 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.602 18:10:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.602 18:10:09 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.602 18:10:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.602 18:10:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.602 18:10:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.602 18:10:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.602 18:10:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.602 18:10:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:00.602 18:10:09 accel -- accel/accel.sh@41 -- # jq -r . 00:07:00.862 [2024-07-24 18:10:09.202812] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:00.862 [2024-07-24 18:10:09.202865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118277 ]
00:07:00.862 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:00.862 EAL: Requested device 0000:b3:01.0 cannot be used
00:07:00.862 [the same "Reached maximum number of QAT devices" / "cannot be used" pair repeats for each remaining QAT virtual function, 0000:b3:01.1 through 0000:b5:02.7]
00:07:00.862 [2024-07-24 18:10:09.296325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.862 [2024-07-24 18:10:09.366263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.431 18:10:09 accel -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:01.431 18:10:09 accel -- common/autotest_common.sh@864 -- # return 0
00:07:01.431 18:10:09 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]]
00:07:01.431 18:10:09 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]]
00:07:01.431 18:10:09 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]]
00:07:01.431 18:10:09 accel -- accel/accel.sh@68 -- # [[ -n '' ]]
00:07:01.431 18:10:09 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:07:01.431 18:10:09 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments
00:07:01.431 18:10:09 accel -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:01.431 18:10:09 accel -- accel/accel.sh@70 -- # jq -r '.
| to_entries | map("\(.key)=\(.value)") | .[]'
00:07:01.431 18:10:09 accel -- common/autotest_common.sh@10 -- # set +x
00:07:01.431 18:10:10 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:01.690 18:10:10 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:01.691 18:10:10 accel -- accel/accel.sh@72 -- # IFS==
00:07:01.691 18:10:10 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:01.691 18:10:10 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:01.691 [the identical four-line loop trace above repeats once per remaining entry of exp_opcs; every opcode is assigned to software]
00:07:01.691 18:10:10 accel -- accel/accel.sh@75 -- # killprocess 2118277
00:07:01.691 18:10:10 accel -- common/autotest_common.sh@950 -- # '[' -z 2118277 ']'
00:07:01.691 18:10:10 accel -- common/autotest_common.sh@954 -- # kill -0 2118277
00:07:01.691 18:10:10 accel -- common/autotest_common.sh@955 -- # uname
00:07:01.691 18:10:10 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:01.691 18:10:10 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2118277
00:07:01.691 18:10:10 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:01.691 18:10:10 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:01.691 18:10:10 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2118277'
00:07:01.691 killing process with pid 2118277
18:10:10 accel -- common/autotest_common.sh@969 -- # kill 2118277
00:07:01.691 18:10:10 accel -- common/autotest_common.sh@974 -- # wait 2118277
00:07:01.951 18:10:10 accel -- accel/accel.sh@76 -- # trap - ERR
00:07:01.951 18:10:10 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h
00:07:01.951 18:10:10 accel -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:01.951 18:10:10 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:01.951 18:10:10 accel -- common/autotest_common.sh@10 -- # set +x
00:07:01.951 18:10:10 accel.accel_help -- common/autotest_common.sh@1125 -- # accel_perf -h
00:07:01.951 18:10:10 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:07:01.951 18:10:10 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config
00:07:01.951 18:10:10 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:01.951 18:10:10 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:01.951 18:10:10 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:01.951 18:10:10 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:01.951 18:10:10 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:01.951 18:10:10 accel.accel_help -- accel/accel.sh@40 -- # local IFS=,
00:07:01.951 18:10:10 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
00:07:01.951 18:10:10 accel.accel_help -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:01.951 18:10:10 accel.accel_help -- common/autotest_common.sh@10 -- # set +x
00:07:01.951 18:10:10 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:07:01.951 18:10:10 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:07:01.951 18:10:10 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:01.951 18:10:10 accel -- common/autotest_common.sh@10 -- # set +x
00:07:02.210 ************************************
00:07:02.210 START TEST accel_missing_filename
00:07:02.211 ************************************
00:07:02.211 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress
00:07:02.211 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # local es=0
00:07:02.211 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress
00:07:02.211 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@638 -- # local arg=accel_perf
00:07:02.211 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:02.211 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # type -t accel_perf
00:07:02.211 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:02.211 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress
00:07:02.211 18:10:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:07:02.211 18:10:10 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config
00:07:02.211 18:10:10 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:02.211 18:10:10
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:02.211 18:10:10 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:02.211 18:10:10 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:02.211 18:10:10 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:02.211 18:10:10 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=,
00:07:02.211 18:10:10 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r .
00:07:02.211 [2024-07-24 18:10:10.609772] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:07:02.211 [2024-07-24 18:10:10.609827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118575 ]
00:07:02.211 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.211 EAL: Requested device 0000:b3:01.0 cannot be used
00:07:02.211 [the same "Reached maximum number of QAT devices" / "cannot be used" pair repeats for each remaining QAT virtual function, 0000:b3:01.1 through 0000:b5:02.7]
00:07:02.211 [2024-07-24 18:10:10.704610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.211 [2024-07-24 18:10:10.774559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.471 [2024-07-24 18:10:10.833227] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:02.471 [2024-07-24 18:10:10.894016] accel_perf.c:1463:main: *ERROR*: ERROR starting application
00:07:02.471 A filename is required.
00:07:02.471 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # es=234
00:07:02.471 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:02.471 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=106
00:07:02.471 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # case "$es" in
00:07:02.471 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@670 -- # es=1
00:07:02.471 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:02.471
00:07:02.471 real	0m0.392s
00:07:02.471 user	0m0.249s
00:07:02.471 sys	0m0.163s
00:07:02.471 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:02.471 18:10:10 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x
00:07:02.471 ************************************
00:07:02.471 END TEST accel_missing_filename
00:07:02.471 ************************************
00:07:02.471 18:10:11 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:07:02.471 18:10:11 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']'
00:07:02.471 18:10:11 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:02.471 18:10:11 accel -- common/autotest_common.sh@10 -- # set +x
00:07:02.471 ************************************
00:07:02.471 START TEST accel_compress_verify
00:07:02.471 ************************************
00:07:02.471 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:07:02.471 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # local es=0
00:07:02.471 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:07:02.471 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@638 -- # local arg=accel_perf
00:07:02.471 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:02.471 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # type -t accel_perf
00:07:02.471 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:02.471 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:07:02.471 18:10:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:07:02.471 18:10:11 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:02.471 18:10:11 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:02.471 18:10:11 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:02.471 18:10:11 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:02.471 18:10:11 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:02.471 18:10:11 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:02.471 18:10:11 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=,
00:07:02.471 18:10:11 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r .
00:07:02.731 [2024-07-24 18:10:11.081255] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:07:02.731 [2024-07-24 18:10:11.081317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118617 ]
00:07:02.731 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:02.731 EAL: Requested device 0000:b3:01.0 cannot be used
00:07:02.731 [the same "Reached maximum number of QAT devices" / "cannot be used" pair repeats for each remaining QAT virtual function, 0000:b3:01.1 through 0000:b5:02.7]
00:07:02.732 [2024-07-24 18:10:11.177613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.732 [2024-07-24 18:10:11.249936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.732 [2024-07-24 18:10:11.307141] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:02.992 [2024-07-24 18:10:11.367723] accel_perf.c:1463:main: *ERROR*: ERROR starting application
00:07:02.992
00:07:02.992 Compression does not support the verify option, aborting.
00:07:02.992 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # es=161
00:07:02.992 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:02.992 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=33
00:07:02.992 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # case "$es" in
00:07:02.992 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@670 -- # es=1
00:07:02.992 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:02.992
00:07:02.992 real	0m0.392s
00:07:02.992 user	0m0.256s
00:07:02.992 sys	0m0.158s
00:07:02.992 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:02.992 18:10:11 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x
00:07:02.992 ************************************
00:07:02.992 END TEST accel_compress_verify
00:07:02.992 ************************************
00:07:02.992 18:10:11 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:07:02.992 18:10:11 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:07:02.992 18:10:11 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:02.992 18:10:11 accel -- common/autotest_common.sh@10 -- # set +x
00:07:02.992 ************************************
00:07:02.992 START TEST accel_wrong_workload
00:07:02.992 ************************************
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w foobar
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # local es=0
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@638 -- # local arg=accel_perf
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # type -t accel_perf
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar
00:07:02.992 18:10:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:07:02.992 18:10:11 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config
00:07:02.992 18:10:11 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:02.992 18:10:11 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:02.992 18:10:11 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:02.992 18:10:11 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:02.992 18:10:11 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:02.992 18:10:11 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=,
00:07:02.992 18:10:11 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r .
00:07:02.992 Unsupported workload type: foobar
00:07:02.992 [2024-07-24 18:10:11.559981] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:07:02.992 accel_perf options:
00:07:02.992 [-h help message]
00:07:02.992 [-q queue depth per core]
00:07:02.992 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:02.992 [-T number of threads per core
00:07:02.992 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:02.992 [-t time in seconds]
00:07:02.992 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:02.992 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:07:02.992 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:02.992 [-l for compress/decompress workloads, name of uncompressed input file
00:07:02.992 [-S for crc32c workload, use this seed value (default 0)
00:07:02.992 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:02.992 [-f for fill workload, use this BYTE value (default 255)
00:07:02.992 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:02.992 [-y verify result if this switch is on]
00:07:02.992 [-a tasks to allocate per core (default: same value as -q)]
00:07:02.992 Can be used to spread operations across a wider range of memory.
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # es=1
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:02.992
00:07:02.992 real	0m0.044s
00:07:02.992 user	0m0.025s
00:07:02.992 sys	0m0.019s
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:02.992 18:10:11 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x
00:07:02.992 ************************************
00:07:02.992 END TEST accel_wrong_workload
00:07:02.992 ************************************
Error: writing output failed: Broken pipe
00:07:03.251 18:10:11 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:07:03.251 18:10:11 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']'
00:07:03.251 18:10:11 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:03.251 18:10:11 accel -- common/autotest_common.sh@10 -- # set +x
00:07:03.251 ************************************
00:07:03.251 START TEST accel_negative_buffers
00:07:03.251 ************************************
00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # local es=0
00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@638 -- # local arg=accel_perf
00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # type -t accel_perf
00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1
00:07:03.251 18:10:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:07:03.251 18:10:11 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config
00:07:03.251 18:10:11 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:03.251 18:10:11 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:03.251 18:10:11 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:03.251 18:10:11 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:03.251 18:10:11 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:03.251 18:10:11 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=,
00:07:03.251 18:10:11 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r .
00:07:03.251 -x option must be non-negative.
00:07:03.251 [2024-07-24 18:10:11.665668] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:07:03.251 accel_perf options:
00:07:03.251 [-h help message]
00:07:03.251 [-q queue depth per core]
00:07:03.251 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:03.251 [-T number of threads per core
00:07:03.251 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:03.251 [-t time in seconds]
00:07:03.251 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:03.251 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:07:03.251 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:03.251 [-l for compress/decompress workloads, name of uncompressed input file
00:07:03.251 [-S for crc32c workload, use this seed value (default 0)
00:07:03.251 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:03.251 [-f for fill workload, use this BYTE value (default 255)
00:07:03.251 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:03.251 [-y verify result if this switch is on]
00:07:03.251 [-a tasks to allocate per core (default: same value as -q)]
00:07:03.251 Can be used to spread operations across a wider range of memory.
00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # es=1 00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.251 00:07:03.251 real 0m0.028s 00:07:03.251 user 0m0.018s 00:07:03.251 sys 0m0.010s 00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.251 18:10:11 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:03.251 ************************************ 00:07:03.251 END TEST accel_negative_buffers 00:07:03.251 ************************************ 00:07:03.251 Error: writing output failed: Broken pipe 00:07:03.251 18:10:11 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:03.251 18:10:11 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:03.251 18:10:11 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.251 18:10:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.251 ************************************ 00:07:03.251 START TEST accel_crc32c 00:07:03.251 ************************************ 00:07:03.251 18:10:11 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:03.251 18:10:11 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:03.251 18:10:11 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:03.251 18:10:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.251 18:10:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.251 18:10:11 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:03.252 18:10:11 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:03.252 18:10:11 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:03.252 18:10:11 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.252 18:10:11 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.252 18:10:11 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.252 18:10:11 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.252 18:10:11 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.252 18:10:11 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:03.252 18:10:11 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:03.252 [2024-07-24 18:10:11.792451] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:03.252 [2024-07-24 18:10:11.792511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118922 ] 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 
0000:b3:01.5 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:03.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.252 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:01.3 cannot be 
used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:03.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.511 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:03.511 [2024-07-24 18:10:11.887376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.511 [2024-07-24 18:10:11.956144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.511 
18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.511 18:10:12 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.511 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.512 18:10:12 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.893 18:10:13 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:04.893 18:10:13 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.893 00:07:04.893 real 0m1.391s 00:07:04.893 user 0m1.242s 00:07:04.893 sys 0m0.154s 00:07:04.893 18:10:13 accel.accel_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.893 18:10:13 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:04.893 ************************************ 00:07:04.893 END TEST accel_crc32c 00:07:04.893 ************************************ 00:07:04.893 
18:10:13 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:04.893 18:10:13 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:04.893 18:10:13 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.893 18:10:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.893 ************************************ 00:07:04.893 START TEST accel_crc32c_C2 00:07:04.893 ************************************ 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.893 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:07:04.893 [2024-07-24 18:10:13.265787] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:04.893 [2024-07-24 18:10:13.265850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119201 ] 00:07:04.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.893 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:04.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.893 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:04.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.893 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:04.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.893 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:04.893 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:04.894 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:04.894 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:04.894 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.894 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:04.894 [2024-07-24 18:10:13.356847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.894 [2024-07-24 18:10:13.425253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 
00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.894 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:05.155 18:10:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.093 00:07:06.093 real 0m1.385s 00:07:06.093 user 0m1.243s 00:07:06.093 sys 0m0.146s 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.093 18:10:14 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:06.093 ************************************ 00:07:06.093 END TEST accel_crc32c_C2 00:07:06.093 ************************************ 00:07:06.093 18:10:14 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:06.093 18:10:14 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:06.093 18:10:14 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.093 18:10:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.353 ************************************ 00:07:06.353 START TEST accel_copy 00:07:06.353 
************************************ 00:07:06.353 18:10:14 accel.accel_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy -y 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:06.353 18:10:14 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:06.353 [2024-07-24 18:10:14.733328] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:06.353 [2024-07-24 18:10:14.733392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119484 ] 00:07:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.353 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.353 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.353 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.353 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.353 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.353 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.353 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.353 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.353 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.353 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:06.354 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:06.354 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.354 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:06.354 [2024-07-24 18:10:14.825168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.354 [2024-07-24 18:10:14.894654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy 
-- accel/accel.sh@20 -- # val=32 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.613 18:10:14 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:06.613 18:10:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.591 18:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:07.592 18:10:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.592 00:07:07.592 real 0m1.389s 00:07:07.592 user 0m1.235s 00:07:07.592 sys 0m0.155s 00:07:07.592 18:10:16 accel.accel_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.592 18:10:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:07.592 ************************************ 00:07:07.592 END TEST accel_copy 00:07:07.592 ************************************ 00:07:07.592 18:10:16 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.592 18:10:16 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:07.592 18:10:16 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.592 18:10:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.592 ************************************ 00:07:07.592 START TEST accel_fill 00:07:07.592 ************************************ 00:07:07.592 18:10:16 accel.accel_fill -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:07.592 18:10:16 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:07.852 [2024-07-24 18:10:16.208689] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:07.852 [2024-07-24 18:10:16.208750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119746 ] 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:01.7 
cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:01.5 cannot be used 
00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:07.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.852 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:07.852 [2024-07-24 18:10:16.301052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.852 [2024-07-24 18:10:16.368669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 
00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:07.853 18:10:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.233 18:10:17 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:09.233 18:10:17 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.233 00:07:09.233 real 0m1.382s 00:07:09.233 user 0m1.238s 00:07:09.233 sys 0m0.153s 00:07:09.233 18:10:17 accel.accel_fill -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.233 18:10:17 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:09.233 ************************************ 00:07:09.233 END TEST accel_fill 00:07:09.233 ************************************ 00:07:09.233 18:10:17 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:09.233 18:10:17 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:09.233 18:10:17 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.233 18:10:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.233 ************************************ 00:07:09.233 START TEST accel_copy_crc32c 00:07:09.233 ************************************ 00:07:09.233 18:10:17 
accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:09.233 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:09.233 [2024-07-24 18:10:17.674105] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:09.233 [2024-07-24 18:10:17.674163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119992 ] 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.233 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:09.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:09.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:09.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:09.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:09.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:09.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:09.234 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:09.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:09.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:09.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:09.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:09.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:09.234 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:09.234 [2024-07-24 18:10:17.767074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.493 [2024-07-24 18:10:17.837653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 
18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:09.493 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 
00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.494 18:10:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.431 
18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.431 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.691 00:07:10.691 real 0m1.389s 00:07:10.691 user 0m1.258s 00:07:10.691 sys 0m0.137s 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.691 18:10:19 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:10.691 ************************************ 00:07:10.691 END TEST accel_copy_crc32c 00:07:10.691 ************************************ 00:07:10.691 18:10:19 accel -- accel/accel.sh@106 -- # 
run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.691 18:10:19 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:10.691 18:10:19 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.691 18:10:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.691 ************************************ 00:07:10.691 START TEST accel_copy_crc32c_C2 00:07:10.691 ************************************ 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:10.691 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:07:10.691 [2024-07-24 18:10:19.150038] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:10.691 [2024-07-24 18:10:19.150096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120243 ] 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:10.691 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.691 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:10.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.692 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:10.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.692 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:10.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.692 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:10.692 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.692 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:10.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.692 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:10.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.692 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:10.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.692 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:10.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.692 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:10.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.692 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:10.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.692 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:10.692 [2024-07-24 18:10:19.242754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.951 [2024-07-24 18:10:19.313437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.951 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:10.952 
18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.952 18:10:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.332 18:10:20 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:12.332 18:10:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.332 
00:07:12.332 real 0m1.387s 00:07:12.332 user 0m1.237s 00:07:12.332 sys 0m0.159s 00:07:12.333 18:10:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.333 18:10:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:12.333 ************************************ 00:07:12.333 END TEST accel_copy_crc32c_C2 00:07:12.333 ************************************ 00:07:12.333 18:10:20 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:12.333 18:10:20 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:12.333 18:10:20 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.333 18:10:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.333 ************************************ 00:07:12.333 START TEST accel_dualcast 00:07:12.333 ************************************ 00:07:12.333 18:10:20 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dualcast -y 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.333 18:10:20 
accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:12.333 [2024-07-24 18:10:20.615608] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:12.333 [2024-07-24 18:10:20.615678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120497 ] 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:02.0 cannot be used 
00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:12.333 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:12.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:12.333 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:12.333 [2024-07-24 18:10:20.705586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.333 [2024-07-24 18:10:20.774703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.333 18:10:20 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.333 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.334 18:10:20 
accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.334 18:10:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:13.714 18:10:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:13.714
00:07:13.714 real 0m1.383s
00:07:13.714 user 0m1.233s
00:07:13.714 sys 0m0.154s
00:07:13.714 18:10:21 accel.accel_dualcast -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:13.714 18:10:21 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:07:13.714 ************************************
00:07:13.714 END TEST accel_dualcast
00:07:13.714 ************************************
00:07:13.714 18:10:22 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:13.714 18:10:22 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:07:13.714 18:10:22 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:13.714 18:10:22 accel -- common/autotest_common.sh@10 -- # set +x
00:07:13.714 ************************************
00:07:13.714 START TEST accel_compare
00:07:13.714 ************************************
00:07:13.714 18:10:22 accel.accel_compare -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w
compare -y 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:13.714 18:10:22 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:13.714 [2024-07-24 18:10:22.067816] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:13.714 [2024-07-24 18:10:22.067862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120738 ]
00:07:13.715 [2024-07-24 18:10:22.158973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:13.715 [2024-07-24 18:10:22.229549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.715 18:10:22 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.093 18:10:23 accel.accel_compare 
-- accel/accel.sh@21 -- # case "$var" in
00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:15.093 18:10:23 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:15.093
00:07:15.093 real 0m1.370s
00:07:15.093 user 0m1.223s
00:07:15.093 sys 0m0.151s
00:07:15.093 18:10:23 accel.accel_compare -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:15.093 18:10:23 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:07:15.093 ************************************
00:07:15.093 END TEST accel_compare
00:07:15.093 ************************************
00:07:15.093 18:10:23 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:15.093 18:10:23 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:07:15.093 18:10:23 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:15.093 18:10:23 accel -- common/autotest_common.sh@10 -- # set +x
00:07:15.093 ************************************
00:07:15.093 START TEST accel_xor
00:07:15.093 ************************************
00:07:15.093 18:10:23 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf
-c /dev/fd/62 -t 1 -w xor -y
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:07:15.093 18:10:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:07:15.093 [2024-07-24 18:10:23.532092] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:07:15.093 [2024-07-24 18:10:23.532146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120975 ]
00:07:15.094 [2024-07-24 18:10:23.623866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.353 [2024-07-24 18:10:23.693491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:07:15.353 18:10:23 accel.accel_xor --
accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 
18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.353 18:10:23 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.353 18:10:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.291 18:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.291 18:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.291 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.291 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.291 18:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:16.292 18:10:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.292 00:07:16.292 real 0m1.387s 00:07:16.292 user 0m1.234s 00:07:16.292 sys 0m0.157s 00:07:16.292 18:10:24 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.292 18:10:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:16.292 ************************************ 00:07:16.292 END TEST accel_xor 00:07:16.292 ************************************ 00:07:16.551 18:10:24 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:16.551 18:10:24 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:16.551 18:10:24 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.551 18:10:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.551 ************************************ 00:07:16.551 START TEST accel_xor 00:07:16.551 
************************************ 00:07:16.551 18:10:24 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y -x 3 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:16.551 18:10:24 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:16.551 [2024-07-24 18:10:24.995558] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:16.551 [2024-07-24 18:10:24.995607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121241 ] 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:16.551 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.551 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:16.552 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:16.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.552 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:16.552 [2024-07-24 18:10:25.091231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.811 [2024-07-24 18:10:25.164981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@20 
-- # val= 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:16.811 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.811 
18:10:25 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.812 18:10:25 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.812 18:10:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.191 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.192 18:10:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.192 18:10:26 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.192 18:10:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:18.192 18:10:26 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.192 00:07:18.192 real 0m1.388s 00:07:18.192 user 0m1.239s 00:07:18.192 sys 0m0.157s 00:07:18.192 18:10:26 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.192 18:10:26 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:18.192 ************************************ 00:07:18.192 END TEST accel_xor 00:07:18.192 ************************************ 00:07:18.192 18:10:26 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:18.192 18:10:26 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:18.192 18:10:26 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.192 18:10:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.192 ************************************ 00:07:18.192 START TEST accel_dif_verify 00:07:18.192 ************************************ 00:07:18.192 18:10:26 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_verify 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
dif_verify 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:18.192 [2024-07-24 18:10:26.471285] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:18.192 [2024-07-24 18:10:26.471334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121525 ] 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:18.192 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:07:18.192 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:18.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.192 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:18.192 [2024-07-24 18:10:26.563606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.192 [2024-07-24 18:10:26.632690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.192 
18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:18.192 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 
00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:18.193 18:10:26 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:18.193 18:10:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@20 
-- # val= 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:19.572 18:10:27 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.572 00:07:19.572 real 0m1.392s 00:07:19.572 user 0m1.237s 00:07:19.572 sys 0m0.156s 00:07:19.572 18:10:27 accel.accel_dif_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.572 18:10:27 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:19.572 ************************************ 00:07:19.572 END TEST accel_dif_verify 00:07:19.572 ************************************ 00:07:19.572 18:10:27 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:19.572 18:10:27 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:19.572 18:10:27 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.572 18:10:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.573 ************************************ 00:07:19.573 START TEST accel_dif_generate 00:07:19.573 ************************************ 00:07:19.573 18:10:27 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@15 -- 
# accel_perf -t 1 -w dif_generate 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:19.573 18:10:27 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:19.573 [2024-07-24 18:10:27.948413] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:19.573 [2024-07-24 18:10:27.948468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121804 ]
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:01.0 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:01.1 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:01.2 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:01.3 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:01.4 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:01.5 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:01.6 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:01.7 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:02.0 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:02.1 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:02.2 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:02.3 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:02.4 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:02.5 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:02.6 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b3:02.7 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:01.0 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:01.1 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:01.2 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:01.3 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:01.4 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:01.5 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:01.6 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:01.7 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:02.0 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:02.1 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:02.2 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:02.3 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:02.4 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:02.5 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:02.6 cannot be used
00:07:19.573 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:19.573 EAL: Requested device 0000:b5:02.7 cannot be used
00:07:19.573 [2024-07-24 18:10:28.041710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.573 [2024-07-24 18:10:28.114813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.832 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:07:19.832 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:07:19.832 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:07:19.832 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:07:19.832 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:07:19.833 18:10:28
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:19.833 18:10:28 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.833 18:10:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.771 18:10:29 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.771 18:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:20.772 18:10:29 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e 
]] 00:07:20.772 00:07:20.772 real 0m1.390s 00:07:20.772 user 0m1.252s 00:07:20.772 sys 0m0.145s 00:07:20.772 18:10:29 accel.accel_dif_generate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.772 18:10:29 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:20.772 ************************************ 00:07:20.772 END TEST accel_dif_generate 00:07:20.772 ************************************ 00:07:20.772 18:10:29 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:20.772 18:10:29 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:20.772 18:10:29 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.772 18:10:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.032 ************************************ 00:07:21.032 START TEST accel_dif_generate_copy 00:07:21.032 ************************************ 00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate_copy 00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:21.032 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:07:21.032 [2024-07-24 18:10:29.418182] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:07:21.032 [2024-07-24 18:10:29.418241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122091 ]
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:01.0 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:01.1 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:01.2 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:01.3 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:01.4 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:01.5 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:01.6 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:01.7 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:02.0 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:02.1 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:02.2 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:02.3 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:02.4 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:02.5 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:02.6 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b3:02.7 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b5:01.0 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b5:01.1 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b5:01.2 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b5:01.3 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b5:01.4 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b5:01.5 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b5:01.6 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b5:01.7 cannot be used
00:07:21.032 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.032 EAL: Requested device 0000:b5:02.0 cannot be used
00:07:21.033 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.033 EAL: Requested device 0000:b5:02.1 cannot be used
00:07:21.033 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.033 EAL: Requested device 0000:b5:02.2 cannot be used
00:07:21.033 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.033 EAL: Requested device 0000:b5:02.3 cannot be used
00:07:21.033 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.033 EAL: Requested device 0000:b5:02.4 cannot be used
00:07:21.033 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.033 EAL: Requested device 0000:b5:02.5 cannot be used
00:07:21.033 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.033 EAL: Requested device 0000:b5:02.6 cannot be used
00:07:21.033 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:21.033 EAL: Requested device 0000:b5:02.7 cannot be used
00:07:21.033 [2024-07-24 18:10:29.509262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.033 [2024-07-24 18:10:29.577407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 
18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 18:10:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.228 18:10:30 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.228 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.228 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.228 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.228 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.228 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.228 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- 
# IFS=:
00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:22.229
00:07:22.229 real 0m1.390s
00:07:22.229 user 0m1.244s
00:07:22.229 sys 0m0.146s
00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:22.229 18:10:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:07:22.229 ************************************
00:07:22.229 END TEST accel_dif_generate_copy
************************************
00:07:22.229 18:10:30 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:22.229 18:10:30 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:22.229 18:10:30 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:07:22.229 18:10:30 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:22.229 18:10:30 accel -- common/autotest_common.sh@10 -- # set +x
00:07:22.488 ************************************
00:07:22.488 START TEST accel_comp
************************************
00:07:22.488 18:10:30 accel.accel_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:22.488 18:10:30 accel.accel_comp -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:22.488 18:10:30 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:22.488 [2024-07-24 18:10:30.893711] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
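The long `case "$var" in` / `IFS=:` / `read -r var val` runs in this trace come from accel.sh consuming accel_perf's `key: value` output one line at a time and latching interesting values (`accel_module=software`, `accel_opc=compress`, and so on). A minimal standalone sketch of that pattern — the function name and the keys matched here are illustrative, not the real accel.sh internals:

```shell
# Hypothetical re-creation of the key:value parsing loop visible in the
# xtrace above; the real accel.sh matches many more keys.
parse_accel_output() {
    local accel_module= accel_opc=
    local var val
    while IFS=: read -r var val; do
        case "$var" in
            *Module*)   accel_module=${val# } ;;  # e.g. "Module: software"
            *Workload*) accel_opc=${val# } ;;     # e.g. "Workload: compress"
        esac
    done
    printf '%s %s\n' "$accel_module" "$accel_opc"
}
```

Fed lines such as `Module: software` and `Workload: compress`, this would report `software compress`; the `accel_module=software` assignment in the trace is exactly such a match firing.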
00:07:22.488 [2024-07-24 18:10:30.893772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122374 ] 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:22.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.488 EAL: Requested device 0000:b3:02.3 cannot be used 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.489 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:22.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.489 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:22.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.489 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:22.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.489 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:22.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.489 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:22.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.489 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:22.489 [2024-07-24 18:10:30.985105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.489 [2024-07-24 18:10:31.054999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- 
accel/accel.sh@20 -- # val=0x1 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:22.748 18:10:31 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- 
# case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.748 18:10:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.715 
18:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:07:23.715 18:10:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:23.715
00:07:23.715 real 0m1.392s
00:07:23.715 user 0m1.245s
00:07:23.715 sys 0m0.150s
00:07:23.715 18:10:32 accel.accel_comp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:23.715 18:10:32 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:07:23.715 ************************************
00:07:23.715 END TEST accel_comp
************************************
00:07:23.975 18:10:32 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:07:23.975 18:10:32 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:07:23.975 18:10:32 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:23.975 18:10:32 accel -- common/autotest_common.sh@10 -- # set +x
00:07:23.975 ************************************
00:07:23.975 START TEST accel_decomp
************************************
00:07:23.975 18:10:32 accel.accel_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
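The `run_test accel_decomp accel_test ...` call and the starred START/END banners bracketing each test come from a wrapper in common/autotest_common.sh. A simplified, hypothetical sketch of what such a wrapper does — the real helper also handles xtrace toggling and the `real`/`user`/`sys` timing shown above:

```shell
# Simplified, hypothetical version of the run_test wrapper whose
# START/END banners appear throughout this log.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"                 # run the actual test command with its arguments
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```

Invoked as in the log, e.g. `run_test accel_comp accel_test -t 1 -w compress -l .../bib`, each test's output ends up bracketed by its own banners, which is what makes individual tests easy to locate in a log this size.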
00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:23.975 18:10:32 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:23.975 [2024-07-24 18:10:32.370410] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
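The `build_accel_config` steps traced here (`accel_json_cfg=()`, the `[[ 0 -gt 0 ]]` guards, `local IFS=,`, `jq -r .`) collect optional JSON fragments in an array and join them into one config document for accel_perf. A rough sketch of that join step under stated assumptions — the fragment keys are invented for illustration, and the real function pipes the result through `jq -r .` to a file descriptor rather than echoing it:

```shell
# Hypothetical sketch: gather optional JSON fragments in an array,
# then join them with IFS=, into a single config object.
build_accel_config() {
    local accel_json_cfg=()
    # Each guard appends a fragment only when the matching option was given,
    # mirroring the [[ 0 -gt 0 ]] checks in the trace.
    [ -n "${1-}" ] && accel_json_cfg+=("\"module\": \"$1\"")
    [ -n "${2-}" ] && accel_json_cfg+=("\"driver\": \"$2\"")
    local IFS=,
    echo "{ ${accel_json_cfg[*]} }"
}
```

With no options given, the array stays empty and the config is effectively `{ }`, which is why the trace shows every guard evaluating `[[ 0 -gt 0 ]]` to false on this run.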
00:07:23.975 [2024-07-24 18:10:32.370466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122668 ] 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b3:02.3 cannot be used 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:23.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.975 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:23.975 [2024-07-24 18:10:32.459777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.975 [2024-07-24 18:10:32.528298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 
accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- 
accel/accel.sh@20 -- # val=software 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 
00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.235 18:10:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@21 -- 
# case "$var" in 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.170 18:10:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.170 00:07:25.170 real 0m1.387s 00:07:25.170 user 0m1.241s 00:07:25.170 sys 0m0.150s 00:07:25.170 18:10:33 accel.accel_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.170 18:10:33 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:25.170 ************************************ 00:07:25.170 END TEST accel_decomp 00:07:25.170 ************************************ 00:07:25.170 18:10:33 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.428 18:10:33 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:25.428 18:10:33 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.428 18:10:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.428 ************************************ 00:07:25.428 START TEST accel_decomp_full 
00:07:25.428 ************************************ 00:07:25.428 18:10:33 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:25.428 18:10:33 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:25.428 [2024-07-24 18:10:33.838421] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
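The `accel_perf -c /dev/fd/62` invocation above feeds the generated JSON config to the binary through a file-descriptor path rather than a temp file. A hedged sketch of that idiom using bash process substitution (the `consume_config` stand-in below is hypothetical; the real consumer is `accel_perf`):

```shell
# Sketch (assumed, not lifted from accel.sh): hand a config built in
# memory to a program that expects a config-file path, via a /dev/fd
# path produced by bash process substitution.
accel_json_cfg='{"workload": "decompress", "block_size": 4096}'
consume_config() {
  # stand-in for "accel_perf -c <path>": just read the config back
  cat "$1"
}
consume_config <(printf '%s\n' "$accel_json_cfg")
```

The `<(...)` construct expands to a path like `/dev/fd/63`, so nothing is written to disk and the config disappears when the pipe closes.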
00:07:25.428 [2024-07-24 18:10:33.838480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122948 ] 00:07:25.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.428 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:25.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.428 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:25.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.428 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:25.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.428 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:25.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.428 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:25.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.428 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:25.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.428 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:25.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.428 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:25.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.428 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:25.429 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:25.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.429 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:25.429 [2024-07-24 18:10:33.927505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.429 [2024-07-24 18:10:33.995828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.687 18:10:34 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.687 18:10:34 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.687 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.688 18:10:34 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.688 18:10:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.622 18:10:35 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.622 18:10:35 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.622 00:07:26.622 real 0m1.391s 00:07:26.622 user 0m1.255s 00:07:26.622 sys 0m0.142s 00:07:26.622 18:10:35 accel.accel_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.622 18:10:35 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:26.622 ************************************ 00:07:26.622 END TEST accel_decomp_full 00:07:26.622 
************************************ 00:07:26.880 18:10:35 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.880 18:10:35 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:26.880 18:10:35 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.880 18:10:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.880 ************************************ 00:07:26.880 START TEST accel_decomp_mcore 00:07:26.880 ************************************ 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:26.880 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:26.880 [2024-07-24 18:10:35.313746] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:26.880 [2024-07-24 18:10:35.313808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123235 ] 00:07:26.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:26.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:26.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:26.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:26.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:26.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:26.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:26.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:26.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:26.880 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:26.880 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.880 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:26.881 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:26.881 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:26.881 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:26.881 [2024-07-24 18:10:35.403287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.881 [2024-07-24 18:10:35.474946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.881 [2024-07-24 18:10:35.474970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.881 [2024-07-24 18:10:35.475078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.881 [2024-07-24 18:10:35.475080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 
accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
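The `-m 0xf` core mask passed to this mcore run selects cores 0 through 3, which matches the "Total cores available: 4" notice and the four reactor-start messages. A minimal sketch of expanding such a DPDK/SPDK-style hex mask into core IDs (pure bash arithmetic, not code from the harness):

```shell
# Sketch: expand a hex core mask (like -m 0xf above) into the list of
# core IDs whose bits are set.
mask=0xf
cores=()
for ((i = 0; i < 64; i++)); do
  if (( (mask >> i) & 1 )); then
    cores+=("$i")
  fi
done
echo "${cores[*]}"
```

With `mask=0xff` the same loop would yield cores 0 through 7; bash arithmetic contexts accept the `0x` prefix directly.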
00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.139 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:27.140 18:10:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.074 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.332 18:10:36 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.332 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.333 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.333 18:10:36 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.333 00:07:28.333 real 0m1.394s 00:07:28.333 user 0m4.602s 00:07:28.333 sys 0m0.157s 00:07:28.333 18:10:36 accel.accel_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.333 18:10:36 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:28.333 ************************************ 00:07:28.333 END TEST accel_decomp_mcore 00:07:28.333 ************************************ 00:07:28.333 18:10:36 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.333 18:10:36 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:28.333 18:10:36 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.333 18:10:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.333 ************************************ 00:07:28.333 START TEST accel_decomp_full_mcore 00:07:28.333 ************************************ 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.333 18:10:36 
accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:28.333 18:10:36 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:28.333 [2024-07-24 18:10:36.785053] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:28.333 [2024-07-24 18:10:36.785103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123521 ] 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:07:28.333 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 
EAL: Requested device 0000:b5:01.7 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:28.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:28.333 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:28.333 [2024-07-24 18:10:36.876070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.592 [2024-07-24 18:10:36.948764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.592 [2024-07-24 18:10:36.948860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.592 [2024-07-24 18:10:36.948942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.592 [2024-07-24 18:10:36.948945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.592 18:10:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.592 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 
00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.593 18:10:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.593 18:10:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.967 00:07:29.967 real 0m1.410s 00:07:29.967 user 0m4.658s 
00:07:29.967 sys 0m0.158s 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.967 18:10:38 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:29.967 ************************************ 00:07:29.967 END TEST accel_decomp_full_mcore 00:07:29.967 ************************************ 00:07:29.967 18:10:38 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.967 18:10:38 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:29.967 18:10:38 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.967 18:10:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.967 ************************************ 00:07:29.967 START TEST accel_decomp_mthread 00:07:29.967 ************************************ 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@12 
-- # build_accel_config 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:29.967 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:29.967 [2024-07-24 18:10:38.283624] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:29.967 [2024-07-24 18:10:38.283690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123811 ] 00:07:29.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.967 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:29.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.967 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:29.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.967 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:29.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 
EAL: Requested device 0000:b3:01.6 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 
0000:b5:01.4 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:29.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.968 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:29.968 [2024-07-24 18:10:38.374691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.968 [2024-07-24 18:10:38.442802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:29.968 18:10:38 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:29.968 18:10:38 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.968 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.969 18:10:38 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.969 18:10:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.338 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.338 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.339 00:07:31.339 real 0m1.392s 00:07:31.339 user 0m1.249s 00:07:31.339 sys 0m0.149s 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.339 18:10:39 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:31.339 ************************************ 00:07:31.339 END TEST accel_decomp_mthread 00:07:31.339 ************************************ 00:07:31.339 18:10:39 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.339 18:10:39 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:31.339 18:10:39 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.339 18:10:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.339 ************************************ 00:07:31.339 START TEST accel_decomp_full_mthread 
00:07:31.339 ************************************ 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:31.339 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:31.339 [2024-07-24 18:10:39.760270] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:31.339 [2024-07-24 18:10:39.760329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124091 ] 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:31.339 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:31.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.339 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:31.339 [2024-07-24 18:10:39.851326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.339 [2024-07-24 18:10:39.919580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.598 18:10:39 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.598 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.599 18:10:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- 
# val= 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.973 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.974 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.974 18:10:41 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.974 00:07:32.974 real 0m1.414s 00:07:32.974 user 0m1.265s 00:07:32.974 sys 0m0.154s 00:07:32.974 18:10:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.974 18:10:41 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:32.974 ************************************ 00:07:32.974 END TEST accel_decomp_full_mthread 00:07:32.974 ************************************ 00:07:32.974 18:10:41 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:07:32.974 18:10:41 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:07:32.974 18:10:41 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:07:32.974 18:10:41 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:32.974 18:10:41 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2124369 00:07:32.974 18:10:41 accel -- accel/accel.sh@63 -- # waitforlisten 2124369 00:07:32.974 18:10:41 accel -- common/autotest_common.sh@831 -- # '[' -z 2124369 ']' 00:07:32.974 18:10:41 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.974 18:10:41 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 
00:07:32.974 18:10:41 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.974 18:10:41 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:32.974 18:10:41 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.974 18:10:41 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.974 18:10:41 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.974 18:10:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.974 18:10:41 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.974 18:10:41 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.974 18:10:41 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.974 18:10:41 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:32.974 18:10:41 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:32.974 18:10:41 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:32.974 18:10:41 accel -- accel/accel.sh@41 -- # jq -r . 00:07:32.974 [2024-07-24 18:10:41.242601] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:32.974 [2024-07-24 18:10:41.242655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124369 ] 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:32.974 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:32.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:32.974 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:32.974 [2024-07-24 18:10:41.334898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.974 [2024-07-24 18:10:41.405771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.540 [2024-07-24 18:10:41.901773] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:33.540 18:10:42 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.540 18:10:42 accel -- common/autotest_common.sh@864 -- # return 0 00:07:33.540 18:10:42 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:33.540 18:10:42 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:33.540 18:10:42 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:33.540 18:10:42 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:07:33.540 18:10:42 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:07:33.540 18:10:42 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:33.540 18:10:42 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:07:33.541 18:10:42 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.541 18:10:42 accel -- accel/accel.sh@56 -- # jq 
-r '.subsystems[] | select(.subsystem=="accel").config[]' 00:07:33.541 18:10:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.799 "method": "compressdev_scan_accel_module", 00:07:33.799 18:10:42 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:33.799 18:10:42 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:33.799 18:10:42 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 
00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # IFS== 00:07:33.799 18:10:42 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:33.799 18:10:42 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:33.799 18:10:42 accel -- accel/accel.sh@75 -- # killprocess 2124369 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@950 -- # '[' -z 2124369 ']' 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@954 -- # kill -0 2124369 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@955 -- # uname 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2124369 00:07:33.799 18:10:42 accel -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2124369' 00:07:33.799 killing process with pid 2124369 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@969 -- # kill 2124369 00:07:33.799 18:10:42 accel -- common/autotest_common.sh@974 -- # wait 2124369 00:07:34.057 18:10:42 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:34.057 18:10:42 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:34.057 18:10:42 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:34.057 18:10:42 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.057 18:10:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.315 ************************************ 00:07:34.315 START TEST accel_cdev_comp 00:07:34.315 ************************************ 00:07:34.315 18:10:42 accel.accel_cdev_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 
00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:34.315 18:10:42 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:07:34.315 [2024-07-24 18:10:42.716250] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:34.315 [2024-07-24 18:10:42.716307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124660 ] 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:34.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.315 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:07:34.316 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:34.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:34.316 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:34.316 [2024-07-24 18:10:42.806159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.316 [2024-07-24 18:10:42.875189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.883 [2024-07-24 
18:10:43.373336] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:34.884 [2024-07-24 18:10:43.375214] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x10d5920 PMD being used: compress_qat 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 [2024-07-24 18:10:43.378588] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x10da710 PMD being used: compress_qat 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 
00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:34.884 18:10:43 
accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # 
case "$var" in 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:34.884 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:34.885 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:34.885 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:34.885 18:10:43 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.260 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:36.260 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.260 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.261 18:10:44 accel.accel_cdev_comp -- 
accel/accel.sh@19 -- # IFS=: 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:36.261 18:10:44 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:36.261 00:07:36.261 real 0m1.835s 00:07:36.261 user 0m1.449s 00:07:36.261 sys 0m0.392s 00:07:36.261 18:10:44 accel.accel_cdev_comp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.261 18:10:44 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:07:36.261 ************************************ 00:07:36.261 END TEST accel_cdev_comp 00:07:36.261 ************************************ 00:07:36.261 18:10:44 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:36.261 18:10:44 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:36.261 18:10:44 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.261 18:10:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.261 ************************************ 00:07:36.261 START TEST accel_cdev_decomp 00:07:36.261 ************************************ 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 
00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:36.261 18:10:44 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:36.261 [2024-07-24 18:10:44.637703] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:36.261 [2024-07-24 18:10:44.637767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124952 ] 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:36.261 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:36.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.261 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:36.261 [2024-07-24 18:10:44.729305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.261 [2024-07-24 18:10:44.798610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.828 [2024-07-24 18:10:45.308160] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:36.828 [2024-07-24 18:10:45.309957] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1659920 PMD being used: compress_qat 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.828 [2024-07-24 18:10:45.313428] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x165e710 PMD being used: compress_qat 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.828 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.829 18:10:45 
accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:36.829 18:10:45 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.205 18:10:46 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev 
== \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:38.205 00:07:38.205 real 0m1.851s 00:07:38.205 user 0m1.439s 00:07:38.205 sys 0m0.412s 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.205 18:10:46 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:38.205 ************************************ 00:07:38.205 END TEST accel_cdev_decomp 00:07:38.205 ************************************ 00:07:38.205 18:10:46 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:38.205 18:10:46 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:38.205 18:10:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.205 18:10:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.205 ************************************ 00:07:38.205 START TEST accel_cdev_decomp_full 00:07:38.205 ************************************ 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:38.205 18:10:46 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:38.205 [2024-07-24 18:10:46.570426] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:38.205 [2024-07-24 18:10:46.570484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125269 ] 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:38.205 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:38.205 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.205 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:38.206 [2024-07-24 18:10:46.662110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.206 [2024-07-24 18:10:46.731591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.773 [2024-07-24 18:10:47.243392] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:38.773 [2024-07-24 18:10:47.245221] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x8f3920 PMD being used: compress_qat 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 [2024-07-24 18:10:47.247834] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x8f39c0 PMD being used: compress_qat 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 
accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 
accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
IFS=: 00:07:38.773 18:10:47 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:40.152 00:07:40.152 real 0m1.851s 00:07:40.152 user 0m1.455s 00:07:40.152 sys 0m0.399s 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.152 18:10:48 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:40.152 ************************************ 00:07:40.152 END TEST accel_cdev_decomp_full 00:07:40.152 ************************************ 00:07:40.152 18:10:48 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:40.152 18:10:48 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:40.152 18:10:48 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.152 18:10:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.152 ************************************ 00:07:40.152 START TEST accel_cdev_decomp_mcore 00:07:40.152 ************************************ 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:40.152 18:10:48 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:40.152 18:10:48 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:40.152 [2024-07-24 18:10:48.505593] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:40.152 [2024-07-24 18:10:48.505652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125745 ] 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:40.152 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.152 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:40.152 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.153 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:40.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.153 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:40.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.153 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:40.153 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:40.153 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:40.153 [2024-07-24 18:10:48.597600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.153 [2024-07-24 18:10:48.669697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.153 [2024-07-24 18:10:48.669794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.153 [2024-07-24 18:10:48.669862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.153 [2024-07-24 18:10:48.669865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.720 [2024-07-24 18:10:49.195655] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:40.720 [2024-07-24 18:10:49.197557] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2378bd0 PMD being used: compress_qat 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 
18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 [2024-07-24 18:10:49.202104] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f212419b8b0 PMD being used: compress_qat 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 [2024-07-24 18:10:49.203168] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f211c19b8b0 PMD being used: compress_qat 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:40.720 [2024-07-24 18:10:49.203462] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x237e230 PMD being used: compress_qat 00:07:40.720 [2024-07-24 18:10:49.203645] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f211419b8b0 PMD being used: compress_qat 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 18:10:49 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.720 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:40.721 18:10:49 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 
18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:42.096 00:07:42.096 real 0m1.889s 00:07:42.096 user 0m6.289s 00:07:42.096 sys 0m0.430s 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.096 18:10:50 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:42.096 ************************************ 00:07:42.096 END TEST accel_cdev_decomp_mcore 00:07:42.096 ************************************ 00:07:42.096 18:10:50 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.096 18:10:50 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:42.096 18:10:50 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.096 18:10:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.096 ************************************ 00:07:42.096 START TEST accel_cdev_decomp_full_mcore 00:07:42.096 ************************************ 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:42.096 18:10:50 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:42.096 [2024-07-24 18:10:50.476084] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:42.096 [2024-07-24 18:10:50.476135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126081 ] 00:07:42.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.096 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:42.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.096 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:42.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.096 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:42.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.096 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:42.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:42.097 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:42.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.097 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:42.097 [2024-07-24 18:10:50.569792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.097 [2024-07-24 18:10:50.642674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.097 [2024-07-24 18:10:50.642769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.097 [2024-07-24 18:10:50.642858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.097 [2024-07-24 18:10:50.642860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.663 [2024-07-24 18:10:51.169163] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:42.663 [2024-07-24 18:10:51.171052] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1eddbd0 PMD being used: compress_qat 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r 
var val 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.663 [2024-07-24 18:10:51.174730] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f75fc19b8b0 PMD being used: compress_qat 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.663 [2024-07-24 18:10:51.175617] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f75f419b8b0 PMD being used: compress_qat 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:42.663 [2024-07-24 18:10:51.176059] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1ee3420 PMD being used: compress_qat 00:07:42.663 [2024-07-24 18:10:51.176290] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f75ec19b8b0 PMD being used: compress_qat 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.663 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 
18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # 
accel_module=dpdk_compressdev 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:42.664 18:10:51 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 
00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:44.041 00:07:44.041 real 0m1.890s 00:07:44.041 user 0m6.301s 00:07:44.041 sys 0m0.414s 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.041 18:10:52 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:44.041 ************************************ 00:07:44.041 END TEST accel_cdev_decomp_full_mcore 00:07:44.041 ************************************ 00:07:44.041 18:10:52 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:44.041 18:10:52 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:07:44.041 18:10:52 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.041 18:10:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.041 ************************************ 00:07:44.041 START TEST 
accel_cdev_decomp_mthread 00:07:44.041 ************************************ 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:44.041 18:10:52 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- 
# jq -r . 00:07:44.041 [2024-07-24 18:10:52.452317] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:44.041 [2024-07-24 18:10:52.452373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126378 ] 00:07:44.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.041 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:44.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.041 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:44.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.041 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:44.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.041 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:44.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.041 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:44.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.041 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:44.042 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:44.042 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:44.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.042 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:44.042 [2024-07-24 18:10:52.542479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.042 [2024-07-24 18:10:52.611767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.609 [2024-07-24 18:10:53.115587] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:44.609 [2024-07-24 18:10:53.117408] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x21a9920 PMD being used: compress_qat 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.609 [2024-07-24 18:10:53.121490] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: 
Channel 0x21aeb70 PMD being used: compress_qat 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:44.609 [2024-07-24 18:10:53.123263] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x22d1950 PMD being used: compress_qat 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # 
val=decompress 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.609 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.610 18:10:53 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:44.610 18:10:53 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.014 18:10:54 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:46.014 00:07:46.014 real 0m1.848s 00:07:46.014 user 0m1.442s 00:07:46.014 sys 0m0.413s 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.014 18:10:54 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:46.014 ************************************ 00:07:46.014 END TEST accel_cdev_decomp_mthread 00:07:46.014 ************************************ 00:07:46.014 18:10:54 accel -- 
accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:46.014 18:10:54 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:46.014 18:10:54 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.014 18:10:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.014 ************************************ 00:07:46.014 START TEST accel_cdev_decomp_full_mthread 00:07:46.014 ************************************ 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread 
-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:46.014 18:10:54 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:46.014 [2024-07-24 18:10:54.384187] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:46.014 [2024-07-24 18:10:54.384247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126689 ] 00:07:46.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.014 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:46.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.014 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:46.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.014 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:46.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.014 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:01.6 cannot be 
used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:46.015 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:46.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:46.015 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:46.015 [2024-07-24 18:10:54.476726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.015 [2024-07-24 18:10:54.546918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.583 [2024-07-24 18:10:55.054333] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:46.583 [2024-07-24 18:10:55.056107] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc1d920 PMD being 
used: compress_qat 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.583 [2024-07-24 18:10:55.059318] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc1d9c0 PMD being used: compress_qat 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:46.583 [2024-07-24 18:10:55.061252] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xd45570 PMD being used: compress_qat 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- 
# case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:46.583 18:10:55 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 
18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:46.583 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:46.584 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:46.584 18:10:55 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:47.960 00:07:47.960 real 0m1.857s 00:07:47.960 user 0m1.445s 00:07:47.960 sys 0m0.417s 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.960 18:10:56 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:47.960 ************************************ 00:07:47.960 END TEST accel_cdev_decomp_full_mthread 00:07:47.960 ************************************ 00:07:47.960 18:10:56 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:07:47.960 18:10:56 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:47.960 18:10:56 accel -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:47.960 18:10:56 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:47.960 18:10:56 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.960 18:10:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.960 18:10:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.960 18:10:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.960 18:10:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.960 18:10:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.960 18:10:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.960 18:10:56 accel -- accel/accel.sh@40 -- # local IFS=, 
00:07:47.960 18:10:56 accel -- accel/accel.sh@41 -- # jq -r . 00:07:47.960 ************************************ 00:07:47.960 START TEST accel_dif_functional_tests 00:07:47.960 ************************************ 00:07:47.960 18:10:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:47.960 [2024-07-24 18:10:56.345076] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:47.960 [2024-07-24 18:10:56.345127] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127153 ] 00:07:47.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:47.960 EAL: Requested device 0000:b3:01.0 cannot be used
00:07:47.960 [2024-07-24 18:10:56.437995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.960 [2024-07-24 18:10:56.510000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.960 [2024-07-24 18:10:56.510096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.960 [2024-07-24 18:10:56.510097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.219 00:07:48.219 00:07:48.219 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.219 http://cunit.sourceforge.net/ 00:07:48.219 00:07:48.219 00:07:48.219 Suite: accel_dif 00:07:48.219 Test: verify: DIF generated, GUARD check ...passed 
00:07:48.219 Test: verify: DIF generated, APPTAG check ...passed 00:07:48.219 Test: verify: DIF generated, REFTAG check ...passed 00:07:48.219 Test: verify: DIF not generated, GUARD check ...[2024-07-24 18:10:56.587736] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:48.219 passed 00:07:48.219 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 18:10:56.587800] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:48.219 passed 00:07:48.219 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 18:10:56.587822] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:48.219 passed 00:07:48.219 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:48.219 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 18:10:56.587870] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:48.219 passed 00:07:48.219 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:48.219 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:48.220 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:48.220 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 18:10:56.587970] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:48.220 passed 00:07:48.220 Test: verify copy: DIF generated, GUARD check ...passed 00:07:48.220 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:48.220 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:48.220 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 18:10:56.588081] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:48.220 passed 00:07:48.220 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 18:10:56.588106] dif.c: 876:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:48.220 passed 00:07:48.220 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 18:10:56.588129] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:48.220 passed 00:07:48.220 Test: generate copy: DIF generated, GUARD check ...passed 00:07:48.220 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:48.220 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:48.220 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:48.220 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:48.220 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:48.220 Test: generate copy: iovecs-len validate ...[2024-07-24 18:10:56.588296] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:48.220 passed 00:07:48.220 Test: generate copy: buffer alignment validate ...passed 00:07:48.220 00:07:48.220 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.220 suites 1 1 n/a 0 0 00:07:48.220 tests 26 26 26 0 0 00:07:48.220 asserts 115 115 115 0 n/a 00:07:48.220 00:07:48.220 Elapsed time = 0.002 seconds 00:07:48.220 00:07:48.220 real 0m0.463s 00:07:48.220 user 0m0.604s 00:07:48.220 sys 0m0.182s 00:07:48.220 18:10:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.220 18:10:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:48.220 ************************************ 00:07:48.220 END TEST accel_dif_functional_tests 00:07:48.220 ************************************ 00:07:48.220 00:07:48.220 real 0m47.760s 00:07:48.220 user 0m56.424s 00:07:48.220 sys 0m9.546s 00:07:48.220 18:10:56 accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.220 18:10:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.220 
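Each test above is driven through a `run_test` wrapper that prints the `START TEST` / `END TEST` banners and runs the test body with xtrace enabled. A minimal, simplified sketch of that pattern (function shape and names are an assumption for illustration, not SPDK's actual `autotest_common.sh` implementation, which additionally handles timing and xtrace toggling):

```shell
#!/usr/bin/env bash
# Hedged sketch of a run_test-style wrapper: print the banners seen in this
# log, run the test body with its arguments, and propagate its exit status.
# Simplified from (and not identical to) SPDK's autotest_common.sh.
run_test() {
  local name="$1"; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"            # execute the test body
  local rc=$?     # capture its exit status before printing the trailer
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test demo_test echo "test body ran"
```

The real wrapper also records the `real`/`user`/`sys` timing lines that follow each `END TEST` banner; that detail is omitted here.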
************************************ 00:07:48.220 END TEST accel 00:07:48.220 ************************************ 00:07:48.478 18:10:56 -- spdk/autotest.sh@186 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:48.478 18:10:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.478 18:10:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.478 18:10:56 -- common/autotest_common.sh@10 -- # set +x 00:07:48.478 ************************************ 00:07:48.478 START TEST accel_rpc 00:07:48.478 ************************************ 00:07:48.478 18:10:56 accel_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:48.478 * Looking for test storage... 00:07:48.478 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:07:48.478 18:10:56 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:48.478 18:10:56 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2127280 00:07:48.478 18:10:56 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2127280 00:07:48.478 18:10:56 accel_rpc -- common/autotest_common.sh@831 -- # '[' -z 2127280 ']' 00:07:48.478 18:10:56 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:48.478 18:10:56 accel_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.478 18:10:56 accel_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.478 18:10:56 accel_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:48.478 18:10:56 accel_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.478 18:10:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.478 [2024-07-24 18:10:57.035204] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:48.478 [2024-07-24 18:10:57.035255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127280 ] 00:07:48.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:48.737 EAL: Requested device 0000:b3:01.0 cannot be used
00:07:48.738 [2024-07-24 18:10:57.129382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.738 [2024-07-24 18:10:57.203575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.305 18:10:57 accel_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.305 18:10:57 accel_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:49.305 18:10:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:49.305 18:10:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:49.305 18:10:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:49.305 18:10:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:49.305 18:10:57 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:49.305 18:10:57 accel_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:07:49.305 18:10:57 accel_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.305 18:10:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.305 ************************************ 00:07:49.305 START TEST accel_assign_opcode 00:07:49.305 ************************************ 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # accel_assign_opcode_test_suite 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:49.305 [2024-07-24 18:10:57.865554] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:49.305 [2024-07-24 18:10:57.873568] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.305 18:10:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:49.564 18:10:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:49.564 18:10:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:49.564 18:10:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.564 18:10:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:49.564 18:10:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:49.564 18:10:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:49.564 18:10:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.564 software 00:07:49.564 00:07:49.564 real 0m0.244s 00:07:49.564 user 0m0.037s 00:07:49.564 sys 0m0.013s 00:07:49.564 18:10:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.564 18:10:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:49.564 ************************************ 00:07:49.564 END TEST accel_assign_opcode 00:07:49.564 ************************************ 00:07:49.564 18:10:58 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2127280 00:07:49.564 18:10:58 accel_rpc -- common/autotest_common.sh@950 -- # '[' -z 2127280 ']' 00:07:49.564 18:10:58 accel_rpc -- common/autotest_common.sh@954 -- # kill -0 2127280 00:07:49.564 18:10:58 accel_rpc -- common/autotest_common.sh@955 -- # uname 00:07:49.564 18:10:58 accel_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.564 18:10:58 accel_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2127280 00:07:49.822 18:10:58 accel_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.822 18:10:58 accel_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.822 18:10:58 accel_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2127280' 00:07:49.822 killing process with pid 2127280 00:07:49.822 18:10:58 accel_rpc -- 
common/autotest_common.sh@969 -- # kill 2127280 00:07:49.822 18:10:58 accel_rpc -- common/autotest_common.sh@974 -- # wait 2127280 00:07:50.081 00:07:50.081 real 0m1.626s 00:07:50.081 user 0m1.603s 00:07:50.081 sys 0m0.520s 00:07:50.081 18:10:58 accel_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.081 18:10:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.081 ************************************ 00:07:50.081 END TEST accel_rpc 00:07:50.081 ************************************ 00:07:50.081 18:10:58 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:50.081 18:10:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.081 18:10:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.081 18:10:58 -- common/autotest_common.sh@10 -- # set +x 00:07:50.081 ************************************ 00:07:50.082 START TEST app_cmdline 00:07:50.082 ************************************ 00:07:50.082 18:10:58 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:50.340 * Looking for test storage... 
00:07:50.340 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:50.340 18:10:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:50.340 18:10:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2127623 00:07:50.340 18:10:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:50.340 18:10:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2127623 00:07:50.340 18:10:58 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2127623 ']' 00:07:50.340 18:10:58 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.340 18:10:58 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.340 18:10:58 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.340 18:10:58 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.340 18:10:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:50.340 [2024-07-24 18:10:58.758922] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:50.340 [2024-07-24 18:10:58.758976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127623 ] 00:07:50.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.340 EAL: Requested device 0000:b3:01.0 cannot be used
00:07:50.341 [2024-07-24 18:10:58.852913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.341 [2024-07-24 18:10:58.923238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:51.276 { 00:07:51.276 "version": "SPDK v24.09-pre git sha1 23a081919", 00:07:51.276 "fields": { 00:07:51.276 "major": 24, 00:07:51.276 "minor": 9, 00:07:51.276 "patch": 0, 00:07:51.276 "suffix": "-pre", 00:07:51.276 "commit": "23a081919" 00:07:51.276 } 00:07:51.276 } 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq 
-r ".[]" | sort)) 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:51.276 18:10:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:07:51.276 18:10:59 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:51.535 request: 00:07:51.535 { 00:07:51.535 "method": "env_dpdk_get_mem_stats", 00:07:51.535 "req_id": 1 00:07:51.535 } 00:07:51.535 Got JSON-RPC error response 00:07:51.535 response: 00:07:51.535 { 00:07:51.535 "code": -32601, 00:07:51.535 "message": "Method not found" 00:07:51.535 } 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.535 18:10:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2127623 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2127623 ']' 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2127623 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2127623 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2127623' 00:07:51.535 killing process with pid 2127623 00:07:51.535 18:10:59 app_cmdline -- common/autotest_common.sh@969 -- # kill 2127623 00:07:51.535 18:10:59 app_cmdline 
-- common/autotest_common.sh@974 -- # wait 2127623 00:07:51.794 00:07:51.794 real 0m1.718s 00:07:51.794 user 0m1.939s 00:07:51.794 sys 0m0.528s 00:07:51.794 18:11:00 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.794 18:11:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:51.794 ************************************ 00:07:51.794 END TEST app_cmdline 00:07:51.794 ************************************ 00:07:51.794 18:11:00 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:51.794 18:11:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.794 18:11:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.794 18:11:00 -- common/autotest_common.sh@10 -- # set +x 00:07:51.794 ************************************ 00:07:51.794 START TEST version 00:07:51.794 ************************************ 00:07:51.794 18:11:00 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:52.053 * Looking for test storage... 
00:07:52.054 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:52.054 18:11:00 version -- app/version.sh@17 -- # get_header_version major 00:07:52.054 18:11:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:52.054 18:11:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:52.054 18:11:00 version -- app/version.sh@14 -- # cut -f2 00:07:52.054 18:11:00 version -- app/version.sh@17 -- # major=24 00:07:52.054 18:11:00 version -- app/version.sh@18 -- # get_header_version minor 00:07:52.054 18:11:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:52.054 18:11:00 version -- app/version.sh@14 -- # cut -f2 00:07:52.054 18:11:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:52.054 18:11:00 version -- app/version.sh@18 -- # minor=9 00:07:52.054 18:11:00 version -- app/version.sh@19 -- # get_header_version patch 00:07:52.054 18:11:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:52.054 18:11:00 version -- app/version.sh@14 -- # cut -f2 00:07:52.054 18:11:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:52.054 18:11:00 version -- app/version.sh@19 -- # patch=0 00:07:52.054 18:11:00 version -- app/version.sh@20 -- # get_header_version suffix 00:07:52.054 18:11:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:52.054 18:11:00 version -- app/version.sh@14 -- # cut -f2 00:07:52.054 18:11:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:52.054 18:11:00 version -- app/version.sh@20 -- # suffix=-pre 00:07:52.054 18:11:00 version -- app/version.sh@22 -- # version=24.9 00:07:52.054 18:11:00 
version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:52.054 18:11:00 version -- app/version.sh@28 -- # version=24.9rc0 00:07:52.054 18:11:00 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:07:52.054 18:11:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:52.054 18:11:00 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:52.054 18:11:00 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:52.054 00:07:52.054 real 0m0.180s 00:07:52.054 user 0m0.083s 00:07:52.054 sys 0m0.143s 00:07:52.054 18:11:00 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.054 18:11:00 version -- common/autotest_common.sh@10 -- # set +x 00:07:52.054 ************************************ 00:07:52.054 END TEST version 00:07:52.054 ************************************ 00:07:52.054 18:11:00 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:07:52.054 18:11:00 -- spdk/autotest.sh@193 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:52.054 18:11:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.054 18:11:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.054 18:11:00 -- common/autotest_common.sh@10 -- # set +x 00:07:52.054 ************************************ 00:07:52.054 START TEST blockdev_general 00:07:52.054 ************************************ 00:07:52.054 18:11:00 blockdev_general -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:52.313 * Looking for test storage... 
00:07:52.313 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:52.313 18:11:00 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:52.313 18:11:00 blockdev_general -- 
bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2128139 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:07:52.313 18:11:00 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 2128139 00:07:52.313 18:11:00 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 2128139 ']' 00:07:52.313 18:11:00 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.313 18:11:00 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.313 18:11:00 blockdev_general -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.313 18:11:00 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.313 18:11:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:52.313 [2024-07-24 18:11:00.827838] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:52.313 [2024-07-24 18:11:00.827887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128139 ] 00:07:52.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.313 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:52.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.313 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:52.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.313 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:52.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.313 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:52.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.313 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:52.313 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.313 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:52.314 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:52.314 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:52.314 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:52.572 [2024-07-24 18:11:00.920818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.572 [2024-07-24 18:11:00.994592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.139 18:11:01 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.139 18:11:01 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:07:53.139 18:11:01 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:53.139 18:11:01 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:07:53.139 18:11:01 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:07:53.139 18:11:01 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.139 18:11:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:53.397 [2024-07-24 18:11:01.832721] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:53.397 [2024-07-24 18:11:01.832765] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:53.397 00:07:53.398 [2024-07-24 18:11:01.840722] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: Malloc2 00:07:53.398 [2024-07-24 18:11:01.840740] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:53.398 00:07:53.398 Malloc0 00:07:53.398 Malloc1 00:07:53.398 Malloc2 00:07:53.398 Malloc3 00:07:53.398 Malloc4 00:07:53.398 Malloc5 00:07:53.398 Malloc6 00:07:53.398 Malloc7 00:07:53.398 Malloc8 00:07:53.398 Malloc9 00:07:53.398 [2024-07-24 18:11:01.967251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:53.398 [2024-07-24 18:11:01.967289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.398 [2024-07-24 18:11:01.967301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa6f6f0 00:07:53.398 [2024-07-24 18:11:01.967309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.398 [2024-07-24 18:11:01.968208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.398 [2024-07-24 18:11:01.968230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:53.398 TestPT 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.656 18:11:02 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:07:53.656 5000+0 records in 00:07:53.656 5000+0 records out 00:07:53.656 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0260589 s, 393 MB/s 00:07:53.656 18:11:02 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:53.656 AIO0 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.656 
18:11:02 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.656 18:11:02 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:07:53.656 18:11:02 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.656 18:11:02 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.656 18:11:02 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.656 18:11:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:53.657 18:11:02 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.657 18:11:02 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:53.657 18:11:02 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:53.657 18:11:02 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:53.657 18:11:02 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.657 18:11:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 
00:07:53.916 18:11:02 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.916 18:11:02 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:53.916 18:11:02 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:53.918 18:11:02 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "841b16e4-8c22-4705-9bd9-13cb5ca8a7c2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "841b16e4-8c22-4705-9bd9-13cb5ca8a7c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f51d279a-8a79-5c8e-900a-7d1c3d101218"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f51d279a-8a79-5c8e-900a-7d1c3d101218",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "14d613a5-703e-5370-b8e9-52f8e9e81ba7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "14d613a5-703e-5370-b8e9-52f8e9e81ba7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "af5ee2cf-779e-5209-b79f-02cec017a6da"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "af5ee2cf-779e-5209-b79f-02cec017a6da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "bb9b2d57-f01e-5b8a-8945-b4913fe37b43"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bb9b2d57-f01e-5b8a-8945-b4913fe37b43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "aaddff92-fc49-5bcd-bb85-3c02467ec197"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aaddff92-fc49-5bcd-bb85-3c02467ec197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' 
' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ff9ab044-ec24-583e-9660-31db1f1f7d3e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ff9ab044-ec24-583e-9660-31db1f1f7d3e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "4d4b9c62-71cf-54d2-bd34-c290a3748ed8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4d4b9c62-71cf-54d2-bd34-c290a3748ed8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "a0fd5f14-7b77-5cd7-9604-61beeff0676a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a0fd5f14-7b77-5cd7-9604-61beeff0676a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "fd939ef1-8104-5a32-b700-cd0affb005fa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fd939ef1-8104-5a32-b700-cd0affb005fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "341cb4ea-f0e0-53b4-974f-5fb24c1dbaca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "341cb4ea-f0e0-53b4-974f-5fb24c1dbaca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9ac485a6-3d88-5221-8275-6883de61177f"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9ac485a6-3d88-5221-8275-6883de61177f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d936d288-9915-44c7-90e5-adfeb3040148"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d936d288-9915-44c7-90e5-adfeb3040148",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d936d288-9915-44c7-90e5-adfeb3040148",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "69ac6256-08ae-412f-8ae2-923217f9c5c0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": 
"17ceece5-b5d7-49b6-8e16-db857d135312",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "83c5dee1-13a7-47c6-91e2-e253e3eb64f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "83c5dee1-13a7-47c6-91e2-e253e3eb64f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "83c5dee1-13a7-47c6-91e2-e253e3eb64f3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "44b7af4f-70a8-4a08-8b98-3c246caa1399",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5674e43b-8541-4b08-a04b-8917e5b165f4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' 
"3e83c7cd-cc44-405c-bebf-e9ca971bccd4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3e83c7cd-cc44-405c-bebf-e9ca971bccd4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3e83c7cd-cc44-405c-bebf-e9ca971bccd4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2f2099be-c4d0-49d9-b013-e10627efa4bd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "bb493049-2970-48d0-99bf-9413f8a6cb3f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "66df4caf-2d46-4c1e-b6a8-2bc21155ce37"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "66df4caf-2d46-4c1e-b6a8-2bc21155ce37",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:53.918 18:11:02 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:53.918 18:11:02 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:07:53.918 18:11:02 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:53.918 18:11:02 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 2128139 00:07:53.918 18:11:02 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 2128139 ']' 00:07:53.918 18:11:02 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 2128139 00:07:53.918 18:11:02 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:07:53.918 18:11:02 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.918 18:11:02 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2128139 00:07:53.918 18:11:02 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.918 18:11:02 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.918 18:11:02 blockdev_general -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 2128139' 00:07:53.918 killing process with pid 2128139 00:07:53.918 18:11:02 blockdev_general -- common/autotest_common.sh@969 -- # kill 2128139 00:07:53.918 18:11:02 blockdev_general -- common/autotest_common.sh@974 -- # wait 2128139 00:07:54.485 18:11:02 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:54.485 18:11:02 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:54.486 18:11:02 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:54.486 18:11:02 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.486 18:11:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:54.486 ************************************ 00:07:54.486 START TEST bdev_hello_world 00:07:54.486 ************************************ 00:07:54.486 18:11:02 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:54.486 [2024-07-24 18:11:02.896372] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:07:54.486 [2024-07-24 18:11:02.896413] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128561 ] 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:02.1 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b3:02.3 cannot be used 
00:07:54.486 [same qat_pci_device_allocate()/EAL "cannot be used" pair repeated for devices 0000:b3:02.4 through 0000:b5:02.1] 00:07:54.486 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:54.486 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:54.486 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:54.486 [2024-07-24 18:11:02.984612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.486 [2024-07-24 18:11:03.053126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.744 [2024-07-24 18:11:03.194671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:54.744 [2024-07-24 18:11:03.194714] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:54.744 [2024-07-24 18:11:03.194724] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:54.744 [2024-07-24 18:11:03.202663] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:54.744 [2024-07-24 18:11:03.202683] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:54.744 [2024-07-24 18:11:03.210675] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:54.744 [2024-07-24 18:11:03.210692] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:54.744 [2024-07-24 18:11:03.278386] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:54.744 [2024-07-24 18:11:03.278424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.745 [2024-07-24 18:11:03.278434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12acac0 00:07:54.745 [2024-07-24 18:11:03.278442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.745 [2024-07-24 18:11:03.279521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.745 [2024-07-24 18:11:03.279544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:55.002 [2024-07-24 18:11:03.410807] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:55.002 [2024-07-24 18:11:03.410843] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:07:55.002 [2024-07-24 18:11:03.410866] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:55.002 [2024-07-24 18:11:03.410898] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:55.002 [2024-07-24 18:11:03.410930] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:55.002 [2024-07-24 18:11:03.410943] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:55.002 [2024-07-24 18:11:03.410969] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:55.002 00:07:55.002 [2024-07-24 18:11:03.410986] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:55.260 00:07:55.260 real 0m0.813s 00:07:55.260 user 0m0.527s 00:07:55.260 sys 0m0.249s 00:07:55.260 18:11:03 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.260 18:11:03 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:55.260 ************************************ 00:07:55.260 END TEST bdev_hello_world 00:07:55.260 ************************************ 00:07:55.260 18:11:03 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:55.260 18:11:03 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:55.260 18:11:03 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.260 18:11:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:55.260 ************************************ 00:07:55.260 START TEST bdev_bounds 00:07:55.260 ************************************ 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2128636 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2128636' 00:07:55.260 Process bdevio pid: 2128636 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 2128636 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # 
'[' -z 2128636 ']' 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.260 18:11:03 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:55.260 [2024-07-24 18:11:03.793686] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:55.260 [2024-07-24 18:11:03.793730] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128636 ] 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b3:01.5 cannot be 
used 00:07:55.260 [same qat_pci_device_allocate()/EAL "cannot be used" pair repeated for devices 0000:b3:01.6 through 0000:b5:01.3] 00:07:55.260 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:02.0 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:02.1 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:02.2 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:02.3 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:02.4 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:02.5 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:02.6 cannot be used 00:07:55.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.260 EAL: Requested device 0000:b5:02.7 cannot be used 00:07:55.548 [2024-07-24 18:11:03.887860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.548 [2024-07-24 18:11:03.964081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.548 [2024-07-24 18:11:03.964179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 
00:07:55.548 [2024-07-24 18:11:03.964182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.548 [2024-07-24 18:11:04.103529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:55.548 [2024-07-24 18:11:04.103577] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:55.548 [2024-07-24 18:11:04.103586] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:55.548 [2024-07-24 18:11:04.111544] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:55.548 [2024-07-24 18:11:04.111566] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:55.548 [2024-07-24 18:11:04.119559] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:55.548 [2024-07-24 18:11:04.119577] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:55.808 [2024-07-24 18:11:04.187937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:55.808 [2024-07-24 18:11:04.187977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.808 [2024-07-24 18:11:04.188004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6d6c0 00:07:55.808 [2024-07-24 18:11:04.188012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.808 [2024-07-24 18:11:04.188998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.808 [2024-07-24 18:11:04.189021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:56.065 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.065 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:56.065 18:11:04 blockdev_general.bdev_bounds -- 
bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:56.325 I/O targets: 00:07:56.325 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:07:56.325 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:07:56.325 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:07:56.325 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:07:56.325 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:07:56.325 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:07:56.325 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:07:56.325 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:07:56.325 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:07:56.325 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:07:56.325 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:07:56.325 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:07:56.325 raid0: 131072 blocks of 512 bytes (64 MiB) 00:07:56.325 concat0: 131072 blocks of 512 bytes (64 MiB) 00:07:56.325 raid1: 65536 blocks of 512 bytes (32 MiB) 00:07:56.325 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:07:56.325 00:07:56.325 00:07:56.325 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.325 http://cunit.sourceforge.net/ 00:07:56.325 00:07:56.325 00:07:56.325 Suite: bdevio tests on: AIO0 00:07:56.325 Test: blockdev write read block ...passed 00:07:56.325 Test: blockdev write zeroes read block ...passed 00:07:56.325 Test: blockdev write zeroes read no split ...passed 00:07:56.325 Test: blockdev write zeroes read split ...passed 00:07:56.325 Test: blockdev write zeroes read split partial ...passed 00:07:56.325 Test: blockdev reset ...passed 00:07:56.325 Test: blockdev write read 8 blocks ...passed 00:07:56.325 Test: blockdev write read size > 128k ...passed 00:07:56.325 Test: blockdev write read invalid size ...passed 00:07:56.325 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.325 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.325 Test: blockdev write 
read max offset ...passed 00:07:56.326 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.326 Test: blockdev writev readv 8 blocks ...passed 00:07:56.326 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.326 Test: blockdev writev readv block ...passed 00:07:56.326 Test: blockdev writev readv size > 128k ...passed 00:07:56.326 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.326 Test: blockdev comparev and writev ...passed 00:07:56.326 Test: blockdev nvme passthru rw ...passed 00:07:56.326 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.326 Test: blockdev nvme admin passthru ...passed 00:07:56.326 Test: blockdev copy ...passed 00:07:56.326 Suite: bdevio tests on: raid1 00:07:56.326 Test: blockdev write read block ...passed 00:07:56.326 Test: blockdev write zeroes read block ...passed 00:07:56.326 Test: blockdev write zeroes read no split ...passed 00:07:56.326 Test: blockdev write zeroes read split ...passed 00:07:56.326 Test: blockdev write zeroes read split partial ...passed 00:07:56.326 Test: blockdev reset ...passed 00:07:56.326 Test: blockdev write read 8 blocks ...passed 00:07:56.326 Test: blockdev write read size > 128k ...passed 00:07:56.326 Test: blockdev write read invalid size ...passed 00:07:56.326 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.326 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.326 Test: blockdev write read max offset ...passed 00:07:56.326 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.326 Test: blockdev writev readv 8 blocks ...passed 00:07:56.326 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.326 Test: blockdev writev readv block ...passed 00:07:56.326 Test: blockdev writev readv size > 128k ...passed 00:07:56.326 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.326 Test: blockdev comparev and writev ...passed 00:07:56.326 
Test: blockdev nvme passthru rw ...passed 00:07:56.326 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.326 Test: blockdev nvme admin passthru ...passed 00:07:56.326 Test: blockdev copy ...passed 00:07:56.326 Suite: bdevio tests on: concat0 00:07:56.326 Test: blockdev write read block ...passed 00:07:56.326 Test: blockdev write zeroes read block ...passed 00:07:56.326 Test: blockdev write zeroes read no split ...passed 00:07:56.326 Test: blockdev write zeroes read split ...passed 00:07:56.326 Test: blockdev write zeroes read split partial ...passed 00:07:56.326 Test: blockdev reset ...passed 00:07:56.326 Test: blockdev write read 8 blocks ...passed 00:07:56.326 Test: blockdev write read size > 128k ...passed 00:07:56.326 Test: blockdev write read invalid size ...passed 00:07:56.326 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.326 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.326 Test: blockdev write read max offset ...passed 00:07:56.326 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.326 Test: blockdev writev readv 8 blocks ...passed 00:07:56.326 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.326 Test: blockdev writev readv block ...passed 00:07:56.326 Test: blockdev writev readv size > 128k ...passed 00:07:56.326 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.326 Test: blockdev comparev and writev ...passed 00:07:56.326 Test: blockdev nvme passthru rw ...passed 00:07:56.326 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.326 Test: blockdev nvme admin passthru ...passed 00:07:56.326 Test: blockdev copy ...passed 00:07:56.326 Suite: bdevio tests on: raid0 00:07:56.326 Test: blockdev write read block ...passed 00:07:56.326 Test: blockdev write zeroes read block ...passed 00:07:56.326 Test: blockdev write zeroes read no split ...passed 00:07:56.326 Test: blockdev write zeroes read split ...passed 
00:07:56.326 Test: blockdev write zeroes read split partial ...passed 00:07:56.326 Test: blockdev reset ...passed 00:07:56.326 Test: blockdev write read 8 blocks ...passed 00:07:56.326 Test: blockdev write read size > 128k ...passed 00:07:56.326 Test: blockdev write read invalid size ...passed 00:07:56.326 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.326 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.326 Test: blockdev write read max offset ...passed 00:07:56.326 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.326 Test: blockdev writev readv 8 blocks ...passed 00:07:56.326 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.326 Test: blockdev writev readv block ...passed 00:07:56.326 Test: blockdev writev readv size > 128k ...passed 00:07:56.326 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.326 Test: blockdev comparev and writev ...passed 00:07:56.326 Test: blockdev nvme passthru rw ...passed 00:07:56.326 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.326 Test: blockdev nvme admin passthru ...passed 00:07:56.326 Test: blockdev copy ...passed 00:07:56.326 Suite: bdevio tests on: TestPT 00:07:56.326 Test: blockdev write read block ...passed 00:07:56.326 Test: blockdev write zeroes read block ...passed 00:07:56.326 Test: blockdev write zeroes read no split ...passed 00:07:56.326 Test: blockdev write zeroes read split ...passed 00:07:56.326 Test: blockdev write zeroes read split partial ...passed 00:07:56.326 Test: blockdev reset ...passed 00:07:56.326 Test: blockdev write read 8 blocks ...passed 00:07:56.326 Test: blockdev write read size > 128k ...passed 00:07:56.326 Test: blockdev write read invalid size ...passed 00:07:56.326 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.326 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.326 Test: blockdev write 
read max offset ...passed 00:07:56.326 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.326 Test: blockdev writev readv 8 blocks ...passed 00:07:56.326 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.326 Test: blockdev writev readv block ...passed 00:07:56.326 Test: blockdev writev readv size > 128k ...passed 00:07:56.326 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.326 Test: blockdev comparev and writev ...passed 00:07:56.326 Test: blockdev nvme passthru rw ...passed 00:07:56.326 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.326 Test: blockdev nvme admin passthru ...passed 00:07:56.326 Test: blockdev copy ...passed 00:07:56.326 Suite: bdevio tests on: Malloc2p7 00:07:56.326 Test: blockdev write read block ...passed 00:07:56.326 Test: blockdev write zeroes read block ...passed 00:07:56.326 Test: blockdev write zeroes read no split ...passed 00:07:56.326 Test: blockdev write zeroes read split ...passed 00:07:56.326 Test: blockdev write zeroes read split partial ...passed 00:07:56.326 Test: blockdev reset ...passed 00:07:56.326 Test: blockdev write read 8 blocks ...passed 00:07:56.326 Test: blockdev write read size > 128k ...passed 00:07:56.326 Test: blockdev write read invalid size ...passed 00:07:56.326 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.326 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.326 Test: blockdev write read max offset ...passed 00:07:56.326 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.326 Test: blockdev writev readv 8 blocks ...passed 00:07:56.326 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.326 Test: blockdev writev readv block ...passed 00:07:56.326 Test: blockdev writev readv size > 128k ...passed 00:07:56.326 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.326 Test: blockdev comparev and writev ...passed 
00:07:56.326 Test: blockdev nvme passthru rw ...passed 00:07:56.326 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.326 Test: blockdev nvme admin passthru ...passed 00:07:56.326 Test: blockdev copy ...passed 00:07:56.326 Suite: bdevio tests on: Malloc2p6 00:07:56.326 Test: blockdev write read block ...passed 00:07:56.326 Test: blockdev write zeroes read block ...passed 00:07:56.326 Test: blockdev write zeroes read no split ...passed 00:07:56.326 Test: blockdev write zeroes read split ...passed 00:07:56.326 Test: blockdev write zeroes read split partial ...passed 00:07:56.326 Test: blockdev reset ...passed 00:07:56.326 Test: blockdev write read 8 blocks ...passed 00:07:56.326 Test: blockdev write read size > 128k ...passed 00:07:56.326 Test: blockdev write read invalid size ...passed 00:07:56.326 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.326 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.326 Test: blockdev write read max offset ...passed 00:07:56.326 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.326 Test: blockdev writev readv 8 blocks ...passed 00:07:56.326 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.326 Test: blockdev writev readv block ...passed 00:07:56.326 Test: blockdev writev readv size > 128k ...passed 00:07:56.326 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.326 Test: blockdev comparev and writev ...passed 00:07:56.326 Test: blockdev nvme passthru rw ...passed 00:07:56.326 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.326 Test: blockdev nvme admin passthru ...passed 00:07:56.326 Test: blockdev copy ...passed 00:07:56.326 Suite: bdevio tests on: Malloc2p5 00:07:56.326 Test: blockdev write read block ...passed 00:07:56.326 Test: blockdev write zeroes read block ...passed 00:07:56.326 Test: blockdev write zeroes read no split ...passed 00:07:56.326 Test: blockdev write zeroes 
read split ...passed 00:07:56.326 Test: blockdev write zeroes read split partial ...passed 00:07:56.326 Test: blockdev reset ...passed 00:07:56.327 Test: blockdev write read 8 blocks ...passed 00:07:56.327 Test: blockdev write read size > 128k ...passed 00:07:56.327 Test: blockdev write read invalid size ...passed 00:07:56.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.327 Test: blockdev write read max offset ...passed 00:07:56.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.327 Test: blockdev writev readv 8 blocks ...passed 00:07:56.327 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.327 Test: blockdev writev readv block ...passed 00:07:56.327 Test: blockdev writev readv size > 128k ...passed 00:07:56.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.327 Test: blockdev comparev and writev ...passed 00:07:56.327 Test: blockdev nvme passthru rw ...passed 00:07:56.327 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.327 Test: blockdev nvme admin passthru ...passed 00:07:56.327 Test: blockdev copy ...passed 00:07:56.327 Suite: bdevio tests on: Malloc2p4 00:07:56.327 Test: blockdev write read block ...passed 00:07:56.327 Test: blockdev write zeroes read block ...passed 00:07:56.327 Test: blockdev write zeroes read no split ...passed 00:07:56.327 Test: blockdev write zeroes read split ...passed 00:07:56.327 Test: blockdev write zeroes read split partial ...passed 00:07:56.327 Test: blockdev reset ...passed 00:07:56.327 Test: blockdev write read 8 blocks ...passed 00:07:56.327 Test: blockdev write read size > 128k ...passed 00:07:56.327 Test: blockdev write read invalid size ...passed 00:07:56.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.327 
Test: blockdev write read max offset ...passed 00:07:56.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.327 Test: blockdev writev readv 8 blocks ...passed 00:07:56.327 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.327 Test: blockdev writev readv block ...passed 00:07:56.327 Test: blockdev writev readv size > 128k ...passed 00:07:56.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.327 Test: blockdev comparev and writev ...passed 00:07:56.327 Test: blockdev nvme passthru rw ...passed 00:07:56.327 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.327 Test: blockdev nvme admin passthru ...passed 00:07:56.327 Test: blockdev copy ...passed 00:07:56.327 Suite: bdevio tests on: Malloc2p3 00:07:56.327 Test: blockdev write read block ...passed 00:07:56.327 Test: blockdev write zeroes read block ...passed 00:07:56.327 Test: blockdev write zeroes read no split ...passed 00:07:56.327 Test: blockdev write zeroes read split ...passed 00:07:56.327 Test: blockdev write zeroes read split partial ...passed 00:07:56.327 Test: blockdev reset ...passed 00:07:56.327 Test: blockdev write read 8 blocks ...passed 00:07:56.327 Test: blockdev write read size > 128k ...passed 00:07:56.327 Test: blockdev write read invalid size ...passed 00:07:56.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.327 Test: blockdev write read max offset ...passed 00:07:56.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.327 Test: blockdev writev readv 8 blocks ...passed 00:07:56.327 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.327 Test: blockdev writev readv block ...passed 00:07:56.327 Test: blockdev writev readv size > 128k ...passed 00:07:56.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.327 Test: blockdev comparev and 
writev ...passed 00:07:56.327 Test: blockdev nvme passthru rw ...passed 00:07:56.327 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.327 Test: blockdev nvme admin passthru ...passed 00:07:56.327 Test: blockdev copy ...passed 00:07:56.327 Suite: bdevio tests on: Malloc2p2 00:07:56.327 Test: blockdev write read block ...passed 00:07:56.327 Test: blockdev write zeroes read block ...passed 00:07:56.327 Test: blockdev write zeroes read no split ...passed 00:07:56.327 Test: blockdev write zeroes read split ...passed 00:07:56.327 Test: blockdev write zeroes read split partial ...passed 00:07:56.327 Test: blockdev reset ...passed 00:07:56.327 Test: blockdev write read 8 blocks ...passed 00:07:56.327 Test: blockdev write read size > 128k ...passed 00:07:56.327 Test: blockdev write read invalid size ...passed 00:07:56.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.327 Test: blockdev write read max offset ...passed 00:07:56.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.327 Test: blockdev writev readv 8 blocks ...passed 00:07:56.327 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.327 Test: blockdev writev readv block ...passed 00:07:56.327 Test: blockdev writev readv size > 128k ...passed 00:07:56.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.327 Test: blockdev comparev and writev ...passed 00:07:56.327 Test: blockdev nvme passthru rw ...passed 00:07:56.327 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.327 Test: blockdev nvme admin passthru ...passed 00:07:56.327 Test: blockdev copy ...passed 00:07:56.327 Suite: bdevio tests on: Malloc2p1 00:07:56.327 Test: blockdev write read block ...passed 00:07:56.327 Test: blockdev write zeroes read block ...passed 00:07:56.327 Test: blockdev write zeroes read no split ...passed 00:07:56.327 Test: blockdev 
write zeroes read split ...passed 00:07:56.327 Test: blockdev write zeroes read split partial ...passed 00:07:56.327 Test: blockdev reset ...passed 00:07:56.327 Test: blockdev write read 8 blocks ...passed 00:07:56.327 Test: blockdev write read size > 128k ...passed 00:07:56.327 Test: blockdev write read invalid size ...passed 00:07:56.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.327 Test: blockdev write read max offset ...passed 00:07:56.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.327 Test: blockdev writev readv 8 blocks ...passed 00:07:56.327 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.327 Test: blockdev writev readv block ...passed 00:07:56.327 Test: blockdev writev readv size > 128k ...passed 00:07:56.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.327 Test: blockdev comparev and writev ...passed 00:07:56.327 Test: blockdev nvme passthru rw ...passed 00:07:56.327 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.327 Test: blockdev nvme admin passthru ...passed 00:07:56.327 Test: blockdev copy ...passed 00:07:56.327 Suite: bdevio tests on: Malloc2p0 00:07:56.327 Test: blockdev write read block ...passed 00:07:56.327 Test: blockdev write zeroes read block ...passed 00:07:56.327 Test: blockdev write zeroes read no split ...passed 00:07:56.327 Test: blockdev write zeroes read split ...passed 00:07:56.327 Test: blockdev write zeroes read split partial ...passed 00:07:56.327 Test: blockdev reset ...passed 00:07:56.327 Test: blockdev write read 8 blocks ...passed 00:07:56.327 Test: blockdev write read size > 128k ...passed 00:07:56.327 Test: blockdev write read invalid size ...passed 00:07:56.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 
00:07:56.327 Test: blockdev write read max offset ...passed 00:07:56.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.327 Test: blockdev writev readv 8 blocks ...passed 00:07:56.327 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.327 Test: blockdev writev readv block ...passed 00:07:56.327 Test: blockdev writev readv size > 128k ...passed 00:07:56.327 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.327 Test: blockdev comparev and writev ...passed 00:07:56.327 Test: blockdev nvme passthru rw ...passed 00:07:56.327 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.327 Test: blockdev nvme admin passthru ...passed 00:07:56.327 Test: blockdev copy ...passed 00:07:56.327 Suite: bdevio tests on: Malloc1p1 00:07:56.327 Test: blockdev write read block ...passed 00:07:56.327 Test: blockdev write zeroes read block ...passed 00:07:56.327 Test: blockdev write zeroes read no split ...passed 00:07:56.327 Test: blockdev write zeroes read split ...passed 00:07:56.327 Test: blockdev write zeroes read split partial ...passed 00:07:56.327 Test: blockdev reset ...passed 00:07:56.327 Test: blockdev write read 8 blocks ...passed 00:07:56.327 Test: blockdev write read size > 128k ...passed 00:07:56.327 Test: blockdev write read invalid size ...passed 00:07:56.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.327 Test: blockdev write read max offset ...passed 00:07:56.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.327 Test: blockdev writev readv 8 blocks ...passed 00:07:56.327 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.327 Test: blockdev writev readv block ...passed 00:07:56.327 Test: blockdev writev readv size > 128k ...passed 00:07:56.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.328 Test: blockdev 
comparev and writev ...passed 00:07:56.328 Test: blockdev nvme passthru rw ...passed 00:07:56.328 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.328 Test: blockdev nvme admin passthru ...passed 00:07:56.328 Test: blockdev copy ...passed 00:07:56.328 Suite: bdevio tests on: Malloc1p0 00:07:56.328 Test: blockdev write read block ...passed 00:07:56.328 Test: blockdev write zeroes read block ...passed 00:07:56.328 Test: blockdev write zeroes read no split ...passed 00:07:56.328 Test: blockdev write zeroes read split ...passed 00:07:56.328 Test: blockdev write zeroes read split partial ...passed 00:07:56.328 Test: blockdev reset ...passed 00:07:56.328 Test: blockdev write read 8 blocks ...passed 00:07:56.328 Test: blockdev write read size > 128k ...passed 00:07:56.328 Test: blockdev write read invalid size ...passed 00:07:56.328 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.328 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.328 Test: blockdev write read max offset ...passed 00:07:56.328 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.328 Test: blockdev writev readv 8 blocks ...passed 00:07:56.328 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.328 Test: blockdev writev readv block ...passed 00:07:56.328 Test: blockdev writev readv size > 128k ...passed 00:07:56.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.328 Test: blockdev comparev and writev ...passed 00:07:56.328 Test: blockdev nvme passthru rw ...passed 00:07:56.328 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.328 Test: blockdev nvme admin passthru ...passed 00:07:56.328 Test: blockdev copy ...passed 00:07:56.328 Suite: bdevio tests on: Malloc0 00:07:56.328 Test: blockdev write read block ...passed 00:07:56.328 Test: blockdev write zeroes read block ...passed 00:07:56.328 Test: blockdev write zeroes read no split ...passed 00:07:56.328 
Test: blockdev write zeroes read split ...passed 00:07:56.328 Test: blockdev write zeroes read split partial ...passed 00:07:56.328 Test: blockdev reset ...passed 00:07:56.328 Test: blockdev write read 8 blocks ...passed 00:07:56.328 Test: blockdev write read size > 128k ...passed 00:07:56.328 Test: blockdev write read invalid size ...passed 00:07:56.328 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:56.328 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:56.328 Test: blockdev write read max offset ...passed 00:07:56.328 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:56.328 Test: blockdev writev readv 8 blocks ...passed 00:07:56.328 Test: blockdev writev readv 30 x 1block ...passed 00:07:56.328 Test: blockdev writev readv block ...passed 00:07:56.328 Test: blockdev writev readv size > 128k ...passed 00:07:56.328 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:56.328 Test: blockdev comparev and writev ...passed 00:07:56.328 Test: blockdev nvme passthru rw ...passed 00:07:56.328 Test: blockdev nvme passthru vendor specific ...passed 00:07:56.328 Test: blockdev nvme admin passthru ...passed 00:07:56.328 Test: blockdev copy ...passed 00:07:56.328 00:07:56.328 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.328 suites 16 16 n/a 0 0 00:07:56.328 tests 368 368 368 0 0 00:07:56.328 asserts 2224 2224 2224 0 n/a 00:07:56.328 00:07:56.328 Elapsed time = 0.460 seconds 00:07:56.328 0 00:07:56.328 18:11:04 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2128636 00:07:56.328 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 2128636 ']' 00:07:56.328 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 2128636 00:07:56.328 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:56.586 18:11:04 blockdev_general.bdev_bounds -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.586 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2128636 00:07:56.586 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.586 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.586 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2128636' 00:07:56.586 killing process with pid 2128636 00:07:56.586 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 2128636 00:07:56.586 18:11:04 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 2128636 00:07:56.845 18:11:05 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:56.845 00:07:56.845 real 0m1.455s 00:07:56.845 user 0m3.650s 00:07:56.845 sys 0m0.407s 00:07:56.845 18:11:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.845 18:11:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:56.845 ************************************ 00:07:56.845 END TEST bdev_bounds 00:07:56.845 ************************************ 00:07:56.845 18:11:05 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:56.845 18:11:05 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:56.845 18:11:05 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.845 18:11:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:56.845 ************************************ 00:07:56.845 START TEST bdev_nbd 
00:07:56.845 ************************************ 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' 
'/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2128936 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2128936 /var/tmp/spdk-nbd.sock 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 2128936 ']' 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:56.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.846 18:11:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:56.846 [2024-07-24 18:11:05.342780] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:07:56.846 [2024-07-24 18:11:05.342824] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:01.0 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:01.1 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:01.2 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:01.3 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:01.4 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:01.5 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:01.6 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:01.7 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:02.0 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:02.1 cannot be 
used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:02.2 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:02.3 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:02.4 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:02.5 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:02.6 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b3:02.7 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b5:01.0 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b5:01.1 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b5:01.2 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b5:01.3 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b5:01.4 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b5:01.5 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b5:01.6 cannot be used 00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:56.846 EAL: Requested device 0000:b5:01.7 cannot be used 00:07:56.846 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:56.846 EAL: Requested device 0000:b5:02.0 cannot be used
00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:56.846 EAL: Requested device 0000:b5:02.1 cannot be used
00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:56.846 EAL: Requested device 0000:b5:02.2 cannot be used
00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:56.846 EAL: Requested device 0000:b5:02.3 cannot be used
00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:56.846 EAL: Requested device 0000:b5:02.4 cannot be used
00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:56.846 EAL: Requested device 0000:b5:02.5 cannot be used
00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:56.846 EAL: Requested device 0000:b5:02.6 cannot be used
00:07:56.846 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:56.846 EAL: Requested device 0000:b5:02.7 cannot be used
00:07:56.846 [2024-07-24 18:11:05.435215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:57.105 [2024-07-24 18:11:05.505086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:57.105 [2024-07-24 18:11:05.640807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:07:57.105 [2024-07-24 18:11:05.640856] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:07:57.105 [2024-07-24 18:11:05.640865] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:07:57.105 [2024-07-24 18:11:05.648816] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:07:57.105 [2024-07-24 18:11:05.648835] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:07:57.105 [2024-07-24 18:11:05.656830] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:07:57.105 [2024-07-24 18:11:05.656848] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:07:57.363 [2024-07-24 18:11:05.724777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:07:57.363 [2024-07-24 18:11:05.724818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:57.363 [2024-07-24 18:11:05.724829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xff46f0
00:07:57.363 [2024-07-24 18:11:05.724837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:57.363 [2024-07-24 18:11:05.725893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:57.363 [2024-07-24 18:11:05.725917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # return 0
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:57.622 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:57.881 1+0 records in
00:07:57.881 1+0 records out
00:07:57.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275816 s, 14.9 MB/s
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:57.881 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:58.141 1+0 records in
00:07:58.141 1+0 records out
00:07:58.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256633 s, 16.0 MB/s
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:58.141 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:58.400 1+0 records in
00:07:58.400 1+0 records out
00:07:58.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189082 s, 21.7 MB/s
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:58.400 1+0 records in
00:07:58.400 1+0 records out
00:07:58.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030285 s, 13.5 MB/s
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:58.400 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:58.659 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:58.659 18:11:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:58.659 1+0 records in
00:07:58.659 1+0 records out
00:07:58.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335798 s, 12.2 MB/s
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:58.659 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:58.918 1+0 records in
00:07:58.918 1+0 records out
00:07:58.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353809 s, 11.6 MB/s
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:58.918 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:59.176 1+0 records in
00:07:59.176 1+0 records out
00:07:59.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367065 s, 11.2 MB/s
00:07:59.176 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:59.177 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:59.177 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:59.177 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:59.177 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:59.177 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:59.177 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:59.177 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:59.436 1+0 records in
00:07:59.436 1+0 records out
00:07:59.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387207 s, 10.6 MB/s
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:59.436 18:11:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:59.695 1+0 records in
00:07:59.695 1+0 records out
00:07:59.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332431 s, 12.3 MB/s
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:59.695 1+0 records in
00:07:59.695 1+0 records out
00:07:59.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413324 s, 9.9 MB/s
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:59.695 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:59.954 1+0 records in
00:07:59.954 1+0 records out
00:07:59.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480815 s, 8.5 MB/s
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:07:59.954 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:00.213 1+0 records in
00:08:00.213 1+0 records out
00:08:00.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421555 s, 9.7 MB/s
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:08:00.213 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:00.472 1+0 records in
00:08:00.472 1+0 records out
00:08:00.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546652 s, 7.5 MB/s
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:08:00.472 18:11:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:00.731 1+0 records in
00:08:00.731 1+0 records out
00:08:00.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470105 s, 8.7 MB/s
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i
<= 20 )) 00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.731 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.731 1+0 records in 00:08:00.731 1+0 records out 00:08:00.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473077 s, 8.7 MB/s 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.990 1+0 records in 00:08:00.990 1+0 records out 00:08:00.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498714 s, 8.2 MB/s 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.990 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:08:00.990 
18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.249 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd0", 00:08:01.249 "bdev_name": "Malloc0" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd1", 00:08:01.249 "bdev_name": "Malloc1p0" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd2", 00:08:01.249 "bdev_name": "Malloc1p1" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd3", 00:08:01.249 "bdev_name": "Malloc2p0" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd4", 00:08:01.249 "bdev_name": "Malloc2p1" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd5", 00:08:01.249 "bdev_name": "Malloc2p2" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd6", 00:08:01.249 "bdev_name": "Malloc2p3" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd7", 00:08:01.249 "bdev_name": "Malloc2p4" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd8", 00:08:01.249 "bdev_name": "Malloc2p5" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd9", 00:08:01.249 "bdev_name": "Malloc2p6" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd10", 00:08:01.249 "bdev_name": "Malloc2p7" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd11", 00:08:01.249 "bdev_name": "TestPT" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd12", 00:08:01.249 "bdev_name": "raid0" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd13", 00:08:01.249 "bdev_name": "concat0" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd14", 00:08:01.249 "bdev_name": "raid1" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd15", 00:08:01.249 "bdev_name": "AIO0" 00:08:01.249 } 
00:08:01.249 ]' 00:08:01.249 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:01.249 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd0", 00:08:01.249 "bdev_name": "Malloc0" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd1", 00:08:01.249 "bdev_name": "Malloc1p0" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd2", 00:08:01.249 "bdev_name": "Malloc1p1" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd3", 00:08:01.249 "bdev_name": "Malloc2p0" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd4", 00:08:01.249 "bdev_name": "Malloc2p1" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd5", 00:08:01.249 "bdev_name": "Malloc2p2" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd6", 00:08:01.249 "bdev_name": "Malloc2p3" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd7", 00:08:01.249 "bdev_name": "Malloc2p4" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd8", 00:08:01.249 "bdev_name": "Malloc2p5" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd9", 00:08:01.249 "bdev_name": "Malloc2p6" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd10", 00:08:01.249 "bdev_name": "Malloc2p7" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd11", 00:08:01.249 "bdev_name": "TestPT" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd12", 00:08:01.249 "bdev_name": "raid0" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd13", 00:08:01.249 "bdev_name": "concat0" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd14", 00:08:01.249 "bdev_name": "raid1" 00:08:01.249 }, 00:08:01.249 { 00:08:01.249 "nbd_device": "/dev/nbd15", 00:08:01.249 "bdev_name": "AIO0" 00:08:01.249 } 00:08:01.249 ]' 
00:08:01.249 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:01.249 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:08:01.249 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.250 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:08:01.250 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.250 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:01.250 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.250 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.508 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.508 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.508 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.508 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.508 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.508 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.508 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.508 18:11:09 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:01.508 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.508 18:11:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:01.767 18:11:10 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.767 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:02.024 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:02.024 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:02.024 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:02.024 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.024 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.024 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:02.024 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.024 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.024 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.024 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:02.283 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:02.283 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:02.283 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:02.283 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.283 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.283 18:11:10 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:02.283 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.283 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.284 18:11:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:02.542 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:02.542 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:02.542 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:02.542 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:08:02.542 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.542 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:02.542 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.542 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.542 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.542 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:08:02.800 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:08:02.800 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:08:02.800 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:08:02.800 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.800 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.800 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:08:02.800 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.800 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.800 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.800 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd8 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.059 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:03.317 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:03.317 18:11:11 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:03.317 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:03.317 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.317 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.317 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:03.317 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.317 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.317 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.317 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:03.575 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:03.575 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:03.575 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:03.575 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.575 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.575 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:03.575 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.575 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.575 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.575 18:11:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:03.575 18:11:12 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:03.576 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:03.576 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:03.576 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.576 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.576 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.833 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:04.091 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:04.091 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:04.091 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:04.091 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.091 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.091 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:04.091 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.091 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.091 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.092 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.350 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:04.609 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.609 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:04.609 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:04.609 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:04.609 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:04.610 18:11:12 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:04.610 18:11:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:04.610 /dev/nbd0 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.610 1+0 records in 00:08:04.610 1+0 records out 00:08:04.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147058 s, 27.9 MB/s 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:04.610 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:08:04.869 /dev/nbd1 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.869 1+0 records in 00:08:04.869 1+0 records out 00:08:04.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258561 s, 15.8 MB/s 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:04.869 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:08:05.128 /dev/nbd10 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 
/proc/partitions 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.128 1+0 records in 00:08:05.128 1+0 records out 00:08:05.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299899 s, 13.7 MB/s 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.128 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:08:05.386 /dev/nbd11 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:05.386 18:11:13 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.386 1+0 records in 00:08:05.386 1+0 records out 00:08:05.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217952 s, 18.8 MB/s 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.386 18:11:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc2p1 /dev/nbd12 00:08:05.645 /dev/nbd12 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.646 1+0 records in 00:08:05.646 1+0 records out 00:08:05.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240649 s, 17.0 MB/s 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 
00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.646 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:08:05.646 /dev/nbd13 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.904 1+0 records in 00:08:05.904 1+0 records out 00:08:05.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340162 s, 12.0 MB/s 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:08:05.904 /dev/nbd14 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.904 1+0 records in 
00:08:05.904 1+0 records out 00:08:05.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033245 s, 12.3 MB/s 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:05.904 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:08:06.162 /dev/nbd15 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.162 1+0 records in 00:08:06.162 1+0 records out 00:08:06.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331608 s, 12.4 MB/s 00:08:06.162 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.163 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.163 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.163 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.163 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.163 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.163 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.163 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:08:06.420 /dev/nbd2 00:08:06.420 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:08:06.420 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:08:06.420 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:06.420 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.420 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 
00:08:06.420 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.421 1+0 records in 00:08:06.421 1+0 records out 00:08:06.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310918 s, 13.2 MB/s 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.421 18:11:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:08:06.678 /dev/nbd3 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:08:06.678 18:11:15 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.678 1+0 records in 00:08:06.678 1+0 records out 00:08:06.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481882 s, 8.5 MB/s 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i 
< 16 )) 00:08:06.678 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:08:06.937 /dev/nbd4 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.937 1+0 records in 00:08:06.937 1+0 records out 00:08:06.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357376 s, 11.5 MB/s 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:06.937 
18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:06.937 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:08:06.937 /dev/nbd5 00:08:07.195 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:08:07.195 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:08:07.195 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:07.195 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:07.195 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.196 1+0 records in 00:08:07.196 1+0 records out 00:08:07.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490358 s, 8.4 MB/s 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 
-- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:08:07.196 /dev/nbd6 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.196 1+0 records in 00:08:07.196 1+0 records out 00:08:07.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554142 s, 7.4 MB/s 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.196 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:08:07.454 /dev/nbd7 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 
/proc/partitions 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.454 1+0 records in 00:08:07.454 1+0 records out 00:08:07.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357974 s, 11.4 MB/s 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.454 18:11:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:08:07.712 /dev/nbd8 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:08:07.712 18:11:16 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.712 1+0 records in 00:08:07.712 1+0 records out 00:08:07.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565429 s, 7.2 MB/s 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.712 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 
/dev/nbd9 00:08:07.971 /dev/nbd9 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.971 1+0 records in 00:08:07.971 1+0 records out 00:08:07.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572503 s, 7.2 MB/s 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:07.971 
18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.971 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.231 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd0", 00:08:08.231 "bdev_name": "Malloc0" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd1", 00:08:08.231 "bdev_name": "Malloc1p0" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd10", 00:08:08.231 "bdev_name": "Malloc1p1" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd11", 00:08:08.231 "bdev_name": "Malloc2p0" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd12", 00:08:08.231 "bdev_name": "Malloc2p1" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd13", 00:08:08.231 "bdev_name": "Malloc2p2" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd14", 00:08:08.231 "bdev_name": "Malloc2p3" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd15", 00:08:08.231 "bdev_name": "Malloc2p4" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd2", 00:08:08.231 "bdev_name": "Malloc2p5" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd3", 00:08:08.231 "bdev_name": "Malloc2p6" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd4", 00:08:08.231 "bdev_name": "Malloc2p7" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd5", 00:08:08.231 "bdev_name": "TestPT" 00:08:08.231 }, 00:08:08.231 { 
00:08:08.231 "nbd_device": "/dev/nbd6", 00:08:08.231 "bdev_name": "raid0" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd7", 00:08:08.231 "bdev_name": "concat0" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd8", 00:08:08.231 "bdev_name": "raid1" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd9", 00:08:08.231 "bdev_name": "AIO0" 00:08:08.231 } 00:08:08.231 ]' 00:08:08.231 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd0", 00:08:08.231 "bdev_name": "Malloc0" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd1", 00:08:08.231 "bdev_name": "Malloc1p0" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd10", 00:08:08.231 "bdev_name": "Malloc1p1" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd11", 00:08:08.231 "bdev_name": "Malloc2p0" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd12", 00:08:08.231 "bdev_name": "Malloc2p1" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd13", 00:08:08.231 "bdev_name": "Malloc2p2" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd14", 00:08:08.231 "bdev_name": "Malloc2p3" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd15", 00:08:08.231 "bdev_name": "Malloc2p4" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd2", 00:08:08.231 "bdev_name": "Malloc2p5" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd3", 00:08:08.231 "bdev_name": "Malloc2p6" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd4", 00:08:08.231 "bdev_name": "Malloc2p7" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd5", 00:08:08.231 "bdev_name": "TestPT" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd6", 00:08:08.231 "bdev_name": "raid0" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd7", 00:08:08.231 
"bdev_name": "concat0" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd8", 00:08:08.231 "bdev_name": "raid1" 00:08:08.231 }, 00:08:08.231 { 00:08:08.231 "nbd_device": "/dev/nbd9", 00:08:08.231 "bdev_name": "AIO0" 00:08:08.231 } 00:08:08.231 ]' 00:08:08.231 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.231 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:08.231 /dev/nbd1 00:08:08.231 /dev/nbd10 00:08:08.231 /dev/nbd11 00:08:08.231 /dev/nbd12 00:08:08.231 /dev/nbd13 00:08:08.231 /dev/nbd14 00:08:08.231 /dev/nbd15 00:08:08.231 /dev/nbd2 00:08:08.231 /dev/nbd3 00:08:08.231 /dev/nbd4 00:08:08.231 /dev/nbd5 00:08:08.231 /dev/nbd6 00:08:08.231 /dev/nbd7 00:08:08.231 /dev/nbd8 00:08:08.231 /dev/nbd9' 00:08:08.231 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:08.231 /dev/nbd1 00:08:08.231 /dev/nbd10 00:08:08.231 /dev/nbd11 00:08:08.231 /dev/nbd12 00:08:08.231 /dev/nbd13 00:08:08.231 /dev/nbd14 00:08:08.231 /dev/nbd15 00:08:08.231 /dev/nbd2 00:08:08.231 /dev/nbd3 00:08:08.231 /dev/nbd4 00:08:08.231 /dev/nbd5 00:08:08.231 /dev/nbd6 00:08:08.231 /dev/nbd7 00:08:08.231 /dev/nbd8 00:08:08.231 /dev/nbd9' 00:08:08.231 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.231 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 
00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:08.232 256+0 records in 00:08:08.232 256+0 records out 00:08:08.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109054 s, 96.2 MB/s 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:08.232 256+0 records in 00:08:08.232 256+0 records out 00:08:08.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112847 s, 9.3 MB/s 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.232 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:08.491 256+0 records in 00:08:08.491 256+0 records out 00:08:08.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116489 s, 9.0 MB/s 00:08:08.491 18:11:16 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.491 18:11:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:08.491 256+0 records in 00:08:08.491 256+0 records out 00:08:08.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117947 s, 8.9 MB/s 00:08:08.491 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.491 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:08.750 256+0 records in 00:08:08.750 256+0 records out 00:08:08.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119074 s, 8.8 MB/s 00:08:08.750 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.750 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:08.750 256+0 records in 00:08:08.750 256+0 records out 00:08:08.750 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118676 s, 8.8 MB/s 00:08:08.750 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:08.750 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:09.066 256+0 records in 00:08:09.066 256+0 records out 00:08:09.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115639 s, 9.1 MB/s 00:08:09.066 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.066 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 
oflag=direct 00:08:09.066 256+0 records in 00:08:09.066 256+0 records out 00:08:09.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115366 s, 9.1 MB/s 00:08:09.066 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.066 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:08:09.356 256+0 records in 00:08:09.356 256+0 records out 00:08:09.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116979 s, 9.0 MB/s 00:08:09.356 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.356 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:08:09.356 256+0 records in 00:08:09.356 256+0 records out 00:08:09.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116921 s, 9.0 MB/s 00:08:09.356 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.356 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:08:09.356 256+0 records in 00:08:09.356 256+0 records out 00:08:09.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114722 s, 9.1 MB/s 00:08:09.356 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.356 18:11:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:08:09.614 256+0 records in 00:08:09.614 256+0 records out 00:08:09.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115423 s, 9.1 MB/s 00:08:09.614 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:08:09.614 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:08:09.614 256+0 records in 00:08:09.614 256+0 records out 00:08:09.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116264 s, 9.0 MB/s 00:08:09.614 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.614 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:08:09.873 256+0 records in 00:08:09.873 256+0 records out 00:08:09.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115778 s, 9.1 MB/s 00:08:09.873 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.873 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:08:09.873 256+0 records in 00:08:09.873 256+0 records out 00:08:09.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119972 s, 8.7 MB/s 00:08:09.873 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.873 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:08:10.131 256+0 records in 00:08:10.131 256+0 records out 00:08:10.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120959 s, 8.7 MB/s 00:08:10.131 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.131 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:08:10.131 256+0 records in 
00:08:10.131 256+0 records out 00:08:10.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117451 s, 8.9 MB/s 00:08:10.131 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:08:10.131 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:10.131 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.131 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.132 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.390 18:11:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:10.649 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:10.649 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:10.649 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:10.649 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.649 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.649 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:10.649 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.649 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.649 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.649 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:10.907 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:10.907 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:10.907 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:10.907 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.907 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.907 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:10.907 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.907 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.907 18:11:19 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.907 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:11.165 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:11.165 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:11.165 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:11.165 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.165 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.166 18:11:19 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.166 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:11.424 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:11.424 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:11.424 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:11.424 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.424 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.424 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:11.424 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.424 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.424 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.424 18:11:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:11.683 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:11.683 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:11.683 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:11.683 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.683 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.683 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 
/proc/partitions 00:08:11.683 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.683 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.683 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.683 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:08:11.941 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:08:11.941 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:08:11.941 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:08:11.941 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.941 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.941 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:08:11.941 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.941 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.941 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.941 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.200 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:12.458 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:12.458 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:12.458 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:12.458 18:11:20 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.458 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.458 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:12.458 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.458 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.458 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.458 18:11:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:12.717 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:12.717 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:12.717 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:12.717 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.717 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.717 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:12.717 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.717 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.717 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.717 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:12.975 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:12.976 18:11:21 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.976 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:13.234 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:13.234 
18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:13.234 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:08:13.234 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.234 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.234 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:13.234 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.234 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.234 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.234 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.492 18:11:21 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local 
nbd_list 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:13.750 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:14.009 malloc_lvol_verify 00:08:14.009 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:14.009 5a174447-3f22-4688-996f-887ab10b5c46 00:08:14.009 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:14.267 8de98d95-5934-465f-9757-7f044bd2a945 00:08:14.267 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:14.526 /dev/nbd0 00:08:14.526 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:14.526 mke2fs 1.46.5 (30-Dec-2021) 00:08:14.526 Discarding device blocks: 0/4096 done 00:08:14.526 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:14.526 00:08:14.526 Allocating group tables: 0/1 done 00:08:14.526 Writing inode tables: 0/1 done 00:08:14.526 Creating journal (1024 blocks): done 00:08:14.526 Writing superblocks and filesystem accounting information: 0/1 done 00:08:14.526 00:08:14.526 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:14.526 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:14.526 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.526 18:11:22 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:14.526 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:14.526 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:14.526 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.526 18:11:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2128936 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 2128936 ']' 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 2128936 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:14.526 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.526 18:11:23 
blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2128936 00:08:14.785 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.785 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.785 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2128936' 00:08:14.785 killing process with pid 2128936 00:08:14.785 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 2128936 00:08:14.785 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 2128936 00:08:15.043 18:11:23 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:15.043 00:08:15.043 real 0m18.260s 00:08:15.043 user 0m21.891s 00:08:15.043 sys 0m10.838s 00:08:15.043 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.043 18:11:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:15.043 ************************************ 00:08:15.043 END TEST bdev_nbd 00:08:15.043 ************************************ 00:08:15.043 18:11:23 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:15.043 18:11:23 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:08:15.043 18:11:23 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:08:15.043 18:11:23 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:08:15.043 18:11:23 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:15.043 18:11:23 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.043 18:11:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:15.043 ************************************ 00:08:15.043 START TEST bdev_fio 00:08:15.043 ************************************ 00:08:15.043 18:11:23 
blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:08:15.043 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:08:15.043 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:08:15.043 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:15.043 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:08:15.043 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:08:15.043 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 
00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # 
echo '[job_Malloc2p0]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:08:15.302 18:11:23 
blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.302 18:11:23 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:15.302 ************************************ 00:08:15.302 START TEST bdev_fio_rw_verify 00:08:15.302 ************************************ 00:08:15.302 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:15.302 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:15.302 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:08:15.302 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:15.302 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:15.302 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:15.302 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:15.303 18:11:23 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:15.303 18:11:23 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:15.560 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:15.560 fio-3.35 00:08:15.560 Starting 16 threads 00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.819 EAL: Requested device 0000:b3:01.0 cannot be used 00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.819 EAL: Requested device 0000:b3:01.1 cannot be used 00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.819 EAL: Requested device 0000:b3:01.2 cannot be used 00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.819 EAL: Requested device 0000:b3:01.3 cannot be used 00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.819 EAL: Requested device 0000:b3:01.4 cannot be used 00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.819 EAL: Requested device 0000:b3:01.5 cannot be used 00:08:15.819 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b3:01.6 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b3:01.7 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b3:02.0 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b3:02.1 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b3:02.2 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b3:02.3 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b3:02.4 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b3:02.5 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b3:02.6 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b3:02.7 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:01.0 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:01.1 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:01.2 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:01.3 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:01.4 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:01.5 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:01.6 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:01.7 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:02.0 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:02.1 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:02.2 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:02.3 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:02.4 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:02.5 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:02.6 cannot be used
00:08:15.819 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.819 EAL: Requested device 0000:b5:02.7 cannot be used
00:08:28.031
00:08:28.031 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=2133081: Wed Jul 24 18:11:34 2024
00:08:28.032 read: IOPS=111k, BW=435MiB/s (456MB/s)(4347MiB/10001msec)
00:08:28.032 slat (nsec): min=1907, max=969658, avg=29658.14, stdev=13015.93
00:08:28.032 clat (usec): min=8, max=1521, avg=244.74, stdev=119.37
00:08:28.032 lat (usec): min=16, max=1559, avg=274.39, stdev=125.94
00:08:28.032 clat percentiles (usec):
00:08:28.032 | 50.000th=[ 237], 99.000th=[ 523], 99.900th=[ 619], 99.990th=[ 775],
00:08:28.032 | 99.999th=[ 1188]
00:08:28.032 write: IOPS=174k, BW=679MiB/s (712MB/s)(6708MiB/9874msec); 0 zone resets
00:08:28.032 slat (usec): min=4, max=1281, avg=38.90, stdev=12.82
00:08:28.032 clat (usec): min=8, max=3702, avg=280.94, stdev=131.28
00:08:28.032 lat (usec): min=27, max=3746, avg=319.84, stdev=137.62
00:08:28.032 clat percentiles (usec):
00:08:28.032 | 50.000th=[ 269], 99.000th=[ 611], 99.900th=[ 791], 99.990th=[ 1012],
00:08:28.032 | 99.999th=[ 1680]
00:08:28.032 bw ( KiB/s): min=569584, max=969818, per=99.10%, avg=689425.37, stdev=6398.83, samples=304
00:08:28.032 iops : min=142396, max=242450, avg=172356.11, stdev=1599.66, samples=304
00:08:28.032 lat (usec) : 10=0.01%, 20=0.04%, 50=1.31%, 100=7.54%, 250=39.79%
00:08:28.032 lat (usec) : 500=47.78%, 750=3.40%, 1000=0.12%
00:08:28.032 lat (msec) : 2=0.01%, 4=0.01%
00:08:28.032 cpu : usr=99.28%, sys=0.38%, ctx=657, majf=0, minf=2641
00:08:28.032 IO depths : 1=12.4%, 2=24.7%, 4=50.3%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:28.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:28.032 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:28.032 issued rwts: total=1112772,1717316,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:28.032 latency : target=0, window=0, percentile=100.00%, depth=8
00:08:28.032
00:08:28.032 Run status group 0 (all jobs):
00:08:28.032 READ: bw=435MiB/s (456MB/s), 435MiB/s-435MiB/s (456MB/s-456MB/s), io=4347MiB (4558MB), run=10001-10001msec
00:08:28.032 WRITE: bw=679MiB/s (712MB/s), 679MiB/s-679MiB/s (712MB/s-712MB/s), io=6708MiB (7034MB), run=9874-9874msec
00:08:28.032
00:08:28.032 real 0m11.365s
00:08:28.032 user 2m49.190s
00:08:28.032 sys 0m1.639s
00:08:28.032 18:11:35 blockdev_general.bdev_fio.bdev_fio_rw_verify --
common/autotest_common.sh@1126 -- # xtrace_disable
00:08:28.032 18:11:35 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:08:28.032 ************************************
00:08:28.032 END TEST bdev_fio_rw_verify
00:08:28.032 ************************************
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' ''
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']'
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']'
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']'
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat
00:08:28.032 18:11:35 blockdev_general.bdev_fio --
common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:08:28.032 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:08:28.032 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:28.033 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "841b16e4-8c22-4705-9bd9-13cb5ca8a7c2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "841b16e4-8c22-4705-9bd9-13cb5ca8a7c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f51d279a-8a79-5c8e-900a-7d1c3d101218"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f51d279a-8a79-5c8e-900a-7d1c3d101218",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' 
' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "14d613a5-703e-5370-b8e9-52f8e9e81ba7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "14d613a5-703e-5370-b8e9-52f8e9e81ba7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "af5ee2cf-779e-5209-b79f-02cec017a6da"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "af5ee2cf-779e-5209-b79f-02cec017a6da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' 
' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "bb9b2d57-f01e-5b8a-8945-b4913fe37b43"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bb9b2d57-f01e-5b8a-8945-b4913fe37b43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "aaddff92-fc49-5bcd-bb85-3c02467ec197"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aaddff92-fc49-5bcd-bb85-3c02467ec197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ff9ab044-ec24-583e-9660-31db1f1f7d3e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ff9ab044-ec24-583e-9660-31db1f1f7d3e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "4d4b9c62-71cf-54d2-bd34-c290a3748ed8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4d4b9c62-71cf-54d2-bd34-c290a3748ed8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "a0fd5f14-7b77-5cd7-9604-61beeff0676a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a0fd5f14-7b77-5cd7-9604-61beeff0676a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "fd939ef1-8104-5a32-b700-cd0affb005fa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fd939ef1-8104-5a32-b700-cd0affb005fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": 
true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "341cb4ea-f0e0-53b4-974f-5fb24c1dbaca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "341cb4ea-f0e0-53b4-974f-5fb24c1dbaca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9ac485a6-3d88-5221-8275-6883de61177f"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9ac485a6-3d88-5221-8275-6883de61177f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d936d288-9915-44c7-90e5-adfeb3040148"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d936d288-9915-44c7-90e5-adfeb3040148",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d936d288-9915-44c7-90e5-adfeb3040148",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' 
"name": "Malloc4",' ' "uuid": "69ac6256-08ae-412f-8ae2-923217f9c5c0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "17ceece5-b5d7-49b6-8e16-db857d135312",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "83c5dee1-13a7-47c6-91e2-e253e3eb64f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "83c5dee1-13a7-47c6-91e2-e253e3eb64f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "83c5dee1-13a7-47c6-91e2-e253e3eb64f3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "44b7af4f-70a8-4a08-8b98-3c246caa1399",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' 
"uuid": "5674e43b-8541-4b08-a04b-8917e5b165f4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "3e83c7cd-cc44-405c-bebf-e9ca971bccd4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3e83c7cd-cc44-405c-bebf-e9ca971bccd4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3e83c7cd-cc44-405c-bebf-e9ca971bccd4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2f2099be-c4d0-49d9-b013-e10627efa4bd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "bb493049-2970-48d0-99bf-9413f8a6cb3f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' 
"66df4caf-2d46-4c1e-b6a8-2bc21155ce37"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "66df4caf-2d46-4c1e-b6a8-2bc21155ce37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:28.033 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:08:28.033 Malloc1p0 00:08:28.033 Malloc1p1 00:08:28.033 Malloc2p0 00:08:28.033 Malloc2p1 00:08:28.033 Malloc2p2 00:08:28.033 Malloc2p3 00:08:28.033 Malloc2p4 00:08:28.033 Malloc2p5 00:08:28.033 Malloc2p6 00:08:28.033 Malloc2p7 00:08:28.033 TestPT 00:08:28.033 raid0 00:08:28.033 concat0 ]] 00:08:28.033 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "841b16e4-8c22-4705-9bd9-13cb5ca8a7c2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "841b16e4-8c22-4705-9bd9-13cb5ca8a7c2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 
0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f51d279a-8a79-5c8e-900a-7d1c3d101218"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f51d279a-8a79-5c8e-900a-7d1c3d101218",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "14d613a5-703e-5370-b8e9-52f8e9e81ba7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "14d613a5-703e-5370-b8e9-52f8e9e81ba7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "af5ee2cf-779e-5209-b79f-02cec017a6da"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "af5ee2cf-779e-5209-b79f-02cec017a6da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "bb9b2d57-f01e-5b8a-8945-b4913fe37b43"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bb9b2d57-f01e-5b8a-8945-b4913fe37b43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' 
},' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "aaddff92-fc49-5bcd-bb85-3c02467ec197"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "aaddff92-fc49-5bcd-bb85-3c02467ec197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ff9ab044-ec24-583e-9660-31db1f1f7d3e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ff9ab044-ec24-583e-9660-31db1f1f7d3e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "4d4b9c62-71cf-54d2-bd34-c290a3748ed8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4d4b9c62-71cf-54d2-bd34-c290a3748ed8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "a0fd5f14-7b77-5cd7-9604-61beeff0676a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a0fd5f14-7b77-5cd7-9604-61beeff0676a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "fd939ef1-8104-5a32-b700-cd0affb005fa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fd939ef1-8104-5a32-b700-cd0affb005fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "341cb4ea-f0e0-53b4-974f-5fb24c1dbaca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "341cb4ea-f0e0-53b4-974f-5fb24c1dbaca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9ac485a6-3d88-5221-8275-6883de61177f"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9ac485a6-3d88-5221-8275-6883de61177f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d936d288-9915-44c7-90e5-adfeb3040148"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d936d288-9915-44c7-90e5-adfeb3040148",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d936d288-9915-44c7-90e5-adfeb3040148",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "69ac6256-08ae-412f-8ae2-923217f9c5c0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "17ceece5-b5d7-49b6-8e16-db857d135312",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "83c5dee1-13a7-47c6-91e2-e253e3eb64f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "83c5dee1-13a7-47c6-91e2-e253e3eb64f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' 
"nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "83c5dee1-13a7-47c6-91e2-e253e3eb64f3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "44b7af4f-70a8-4a08-8b98-3c246caa1399",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5674e43b-8541-4b08-a04b-8917e5b165f4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "3e83c7cd-cc44-405c-bebf-e9ca971bccd4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3e83c7cd-cc44-405c-bebf-e9ca971bccd4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3e83c7cd-cc44-405c-bebf-e9ca971bccd4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2f2099be-c4d0-49d9-b013-e10627efa4bd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "bb493049-2970-48d0-99bf-9413f8a6cb3f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "66df4caf-2d46-4c1e-b6a8-2bc21155ce37"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "66df4caf-2d46-4c1e-b6a8-2bc21155ce37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' 
' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:08:28.035 18:11:35 
blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:08:28.035 18:11:35 
blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 
']' 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.035 18:11:35 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:28.035 ************************************ 00:08:28.035 START TEST bdev_fio_trim 00:08:28.035 ************************************ 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:08:28.035 18:11:35 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:28.035 18:11:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:28.035 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.035 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.035 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.035 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.035 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.035 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.035 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.036 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.036 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.036 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.036 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.036 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.036 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.036 
job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:28.036 fio-3.35 00:08:28.036 Starting 14 threads 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:01.0 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:01.1 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:01.2 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:01.3 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:01.4 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:01.5 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:01.6 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:01.7 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:02.0 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:02.1 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:02.2 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:02.3 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:02.4 cannot 
be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:02.5 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:02.6 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b3:02.7 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:01.0 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:01.1 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:01.2 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:01.3 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:01.4 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:01.5 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:01.6 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:01.7 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:02.0 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:02.1 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:02.2 cannot be used 00:08:28.036 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:02.3 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:02.4 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:02.5 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:02.6 cannot be used 00:08:28.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:28.036 EAL: Requested device 0000:b5:02.7 cannot be used 00:08:38.045 00:08:38.045 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=2135219: Wed Jul 24 18:11:46 2024 00:08:38.045 write: IOPS=161k, BW=628MiB/s (659MB/s)(6285MiB/10002msec); 0 zone resets 00:08:38.045 slat (nsec): min=1884, max=188285, avg=30849.17, stdev=8661.61 00:08:38.045 clat (usec): min=19, max=2967, avg=216.76, stdev=76.39 00:08:38.045 lat (usec): min=27, max=2990, avg=247.61, stdev=79.48 00:08:38.045 clat percentiles (usec): 00:08:38.045 | 50.000th=[ 210], 99.000th=[ 396], 99.900th=[ 498], 99.990th=[ 693], 00:08:38.045 | 99.999th=[ 971] 00:08:38.045 bw ( KiB/s): min=548864, max=924582, per=100.00%, avg=646087.05, stdev=7337.92, samples=266 00:08:38.045 iops : min=137214, max=231144, avg=161521.58, stdev=1834.47, samples=266 00:08:38.045 trim: IOPS=161k, BW=628MiB/s (659MB/s)(6285MiB/10002msec); 0 zone resets 00:08:38.045 slat (usec): min=3, max=630, avg=21.15, stdev= 5.91 00:08:38.045 clat (usec): min=3, max=2990, avg=246.62, stdev=79.38 00:08:38.045 lat (usec): min=9, max=3010, avg=267.78, stdev=81.87 00:08:38.045 clat percentiles (usec): 00:08:38.045 | 50.000th=[ 239], 99.000th=[ 429], 99.900th=[ 519], 99.990th=[ 652], 00:08:38.045 | 99.999th=[ 930] 00:08:38.045 bw ( KiB/s): min=548856, max=924582, per=100.00%, avg=646086.63, stdev=7337.94, samples=266 00:08:38.045 iops 
: min=137214, max=231144, avg=161521.68, stdev=1834.46, samples=266 00:08:38.045 lat (usec) : 4=0.01%, 10=0.01%, 20=0.03%, 50=0.11%, 100=2.46% 00:08:38.045 lat (usec) : 250=58.68%, 500=38.59%, 750=0.12%, 1000=0.01% 00:08:38.045 lat (msec) : 2=0.01%, 4=0.01% 00:08:38.045 cpu : usr=99.67%, sys=0.00%, ctx=482, majf=0, minf=1088 00:08:38.045 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:38.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.045 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.045 issued rwts: total=0,1608923,1608925,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.045 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:38.045 00:08:38.045 Run status group 0 (all jobs): 00:08:38.045 WRITE: bw=628MiB/s (659MB/s), 628MiB/s-628MiB/s (659MB/s-659MB/s), io=6285MiB (6590MB), run=10002-10002msec 00:08:38.045 TRIM: bw=628MiB/s (659MB/s), 628MiB/s-628MiB/s (659MB/s-659MB/s), io=6285MiB (6590MB), run=10002-10002msec 00:08:38.303 00:08:38.303 real 0m11.383s 00:08:38.303 user 2m30.160s 00:08:38.303 sys 0m0.952s 00:08:38.303 18:11:46 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.303 18:11:46 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:08:38.303 ************************************ 00:08:38.303 END TEST bdev_fio_trim 00:08:38.303 ************************************ 00:08:38.303 18:11:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:08:38.303 18:11:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:38.303 18:11:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:08:38.303 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:38.303 18:11:46 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:08:38.303 00:08:38.303 
real 0m23.129s 00:08:38.303 user 5m19.545s 00:08:38.303 sys 0m2.808s 00:08:38.303 18:11:46 blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.303 18:11:46 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:38.303 ************************************ 00:08:38.303 END TEST bdev_fio 00:08:38.303 ************************************ 00:08:38.303 18:11:46 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:38.303 18:11:46 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:38.303 18:11:46 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:38.303 18:11:46 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.303 18:11:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:38.303 ************************************ 00:08:38.303 START TEST bdev_verify 00:08:38.303 ************************************ 00:08:38.303 18:11:46 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:38.303 [2024-07-24 18:11:46.890102] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:08:38.303 [2024-07-24 18:11:46.890151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2137036 ]
00:08:38.613 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:38.614 EAL: Requested device 0000:b3:01.0 cannot be used
00:08:38.614 [the two messages above repeat for each remaining QAT virtual function, 0000:b3:01.1 through 0000:b3:02.7 and 0000:b5:01.0 through 0000:b5:02.7]
00:08:38.614 [2024-07-24 18:11:46.980490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:38.614 [2024-07-24 18:11:47.051580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:38.614 [2024-07-24 18:11:47.051583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.880 [2024-07-24 18:11:47.186499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:38.880 [2024-07-24 18:11:47.186534] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:08:38.880 [2024-07-24 18:11:47.186544] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:08:38.880 [2024-07-24 18:11:47.194509] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:38.880 [2024-07-24 18:11:47.194530] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:38.880 [2024-07-24 18:11:47.202523] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:38.880 [2024-07-24 18:11:47.202541] bdev.c:8190:bdev_open_ext:
*NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:38.880 [2024-07-24 18:11:47.271030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:38.880 [2024-07-24 18:11:47.271067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:38.880 [2024-07-24 18:11:47.271080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1206f40
00:08:38.880 [2024-07-24 18:11:47.271089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:38.880 [2024-07-24 18:11:47.272057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:38.880 [2024-07-24 18:11:47.272080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:08:39.139 Running I/O for 5 seconds...
00:08:44.415
00:08:44.415 Latency(us)
00:08:44.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:44.415 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x1000
00:08:44.415 Malloc0 : 5.11 1704.35 6.66 0.00 0.00 74978.79 365.36 288568.12
00:08:44.415 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x1000 length 0x1000
00:08:44.415 Malloc0 : 5.09 1685.78 6.59 0.00 0.00 75802.45 412.88 322122.55
00:08:44.415 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x800
00:08:44.415 Malloc1p0 : 5.11 877.00 3.43 0.00 0.00 145332.91 2634.55 163577.86
00:08:44.415 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x800 length 0x800
00:08:44.415 Malloc1p0 : 5.09 880.36 3.44 0.00 0.00 144791.23 2660.76 164416.72
00:08:44.415 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x800
00:08:44.415 Malloc1p1 : 5.11 876.76 3.42 0.00 0.00 145068.00 2647.65 161061.27
00:08:44.415 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x800 length 0x800
00:08:44.415 Malloc1p1 : 5.09 880.09 3.44 0.00 0.00 144530.34 2673.87 161900.13
00:08:44.415 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x200
00:08:44.415 Malloc2p0 : 5.11 876.52 3.42 0.00 0.00 144804.14 2778.73 158544.69
00:08:44.415 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x200 length 0x200
00:08:44.415 Malloc2p0 : 5.09 879.82 3.44 0.00 0.00 144263.27 2778.73 158544.69
00:08:44.415 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x200
00:08:44.415 Malloc2p1 : 5.11 876.28 3.42 0.00 0.00 144532.37 2582.12 154350.39
00:08:44.415 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x200 length 0x200
00:08:44.415 Malloc2p1 : 5.09 879.56 3.44 0.00 0.00 143989.09 2582.12 154350.39
00:08:44.415 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x200
00:08:44.415 Malloc2p2 : 5.11 876.04 3.42 0.00 0.00 144273.59 2791.83 150994.94
00:08:44.415 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x200 length 0x200
00:08:44.415 Malloc2p2 : 5.09 879.32 3.43 0.00 0.00 143730.15 2804.94 150994.94
00:08:44.415 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x200
00:08:44.415 Malloc2p3 : 5.12 875.80 3.42 0.00 0.00 144003.86 2647.65 147639.50
00:08:44.415 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x200 length 0x200
00:08:44.415 Malloc2p3 : 5.17 891.55 3.48 0.00 0.00 141478.59 2647.65 148478.36
00:08:44.415 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x200
00:08:44.415 Malloc2p4 : 5.12 875.56 3.42 0.00 0.00 143748.10 2660.76 145961.78
00:08:44.415 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x200 length 0x200
00:08:44.415 Malloc2p4 : 5.17 891.28 3.48 0.00 0.00 141228.36 2686.98 146800.64
00:08:44.415 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x200
00:08:44.415 Malloc2p5 : 5.18 890.22 3.48 0.00 0.00 141130.48 2660.76 144284.06
00:08:44.415 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x200 length 0x200
00:08:44.415 Malloc2p5 : 5.17 891.02 3.48 0.00 0.00 140993.63 2686.98 144284.06
00:08:44.415 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x200
00:08:44.415 Malloc2p6 : 5.18 889.96 3.48 0.00 0.00 140891.83 2621.44 141767.48
00:08:44.415 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x200 length 0x200
00:08:44.415 Malloc2p6 : 5.17 890.75 3.48 0.00 0.00 140753.23 2634.55 142606.34
00:08:44.415 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x200
00:08:44.415 Malloc2p7 : 5.18 889.59 3.47 0.00 0.00 140655.62 2660.76 140089.75
00:08:44.415 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x200 length 0x200
00:08:44.415 Malloc2p7 : 5.17 890.49 3.48 0.00 0.00 140493.92 2647.65 140089.75
00:08:44.415 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x1000
00:08:44.415 TestPT : 5.18 867.56 3.39 0.00 0.00 143525.53 13002.34 140089.75
00:08:44.415 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x1000 length 0x1000
00:08:44.415 TestPT : 5.19 865.91 3.38 0.00 0.00 143850.51 9961.47 187065.96
00:08:44.415 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x2000
00:08:44.415 raid0 : 5.18 888.95 3.47 0.00 0.00 139958.82 2634.55 120795.96
00:08:44.415 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x2000 length 0x2000
00:08:44.415 raid0 : 5.18 890.02 3.48 0.00 0.00 139768.09 2647.65 116601.65
00:08:44.415 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x2000
00:08:44.415 concat0 : 5.18 888.72 3.47 0.00 0.00 139692.70 2647.65 115762.79
00:08:44.415 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x2000 length 0x2000
00:08:44.415 concat0 : 5.18 889.66 3.48 0.00 0.00 139516.25 2660.76 111568.49
00:08:44.415 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x1000
00:08:44.415 raid1 : 5.19 888.48 3.47 0.00 0.00 139415.81 3132.62 109051.90
00:08:44.415 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x1000 length 0x1000
00:08:44.415 raid1 : 5.18 889.34 3.47 0.00 0.00 139259.56 3106.41 108213.04
00:08:44.415 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x0 length 0x4e2
00:08:44.415 AIO0 : 5.19 911.83 3.56 0.00 0.00 135541.19 1081.34 109051.90
00:08:44.415 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 4096)
00:08:44.415 Verification LBA range: start 0x4e2 length 0x4e2
00:08:44.415 AIO0 : 5.19 912.45 3.56 0.00 0.00 135414.56 1146.88 112407.35
00:08:44.416 ===================================================================================================================
00:08:44.416 Total : 29941.01 116.96 0.00 0.00 134387.87 365.36 322122.55
00:08:44.675
00:08:44.675 real 0m6.176s
00:08:44.675 user 0m11.614s
00:08:44.675 sys 0m0.328s
00:08:44.675 18:11:53 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:44.675 18:11:53 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:44.675 ************************************
00:08:44.675 END TEST bdev_verify
00:08:44.675 ************************************
00:08:44.675 18:11:53 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:44.675 18:11:53 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:08:44.675 18:11:53 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:44.675 18:11:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:08:44.675 ************************************
00:08:44.675 START TEST bdev_verify_big_io
00:08:44.675 ************************************
00:08:44.675 18:11:53 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:44.675 [2024-07-24 18:11:53.157236] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
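The Latency(us) tables in this log report both IOPS and MiB/s for each job; the throughput column is just IOPS scaled by the IO size. As a small sketch (not part of the test output), reproducing the Malloc0 row of the verify table above (4096-byte IOs at 1704.35 IOPS):

```shell
# Throughput in MiB/s = IOPS * IO size in bytes / 2^20.
# The input numbers are taken from the Malloc0 row of the table above.
awk 'BEGIN { printf "%.2f MiB/s\n", 1704.35 * 4096 / 1048576 }'   # -> 6.66 MiB/s
```

The same relation holds for the 65536-byte big-IO run below, where Malloc0's 296.56 IOPS corresponds to 18.54 MiB/s.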
00:08:44.675 [2024-07-24 18:11:53.157279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138134 ]
00:08:44.675 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:44.675 EAL: Requested device 0000:b3:01.0 cannot be used
00:08:44.676 [the two messages above repeat for each remaining QAT virtual function, 0000:b3:01.1 through 0000:b3:02.7 and 0000:b5:01.0 through 0000:b5:02.7]
00:08:44.676 [2024-07-24 18:11:53.249017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:44.936 [2024-07-24 18:11:53.321856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:44.936 [2024-07-24 18:11:53.321858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:44.936 [2024-07-24 18:11:53.462150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:44.936 [2024-07-24 18:11:53.462192] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:08:44.936 [2024-07-24 18:11:53.462202] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:08:44.936 [2024-07-24 18:11:53.470162] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:44.936 [2024-07-24 18:11:53.470181] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:44.936 [2024-07-24 18:11:53.478179] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:44.936 [2024-07-24 18:11:53.478195] bdev.c:8190:bdev_open_ext:
*NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:45.195 [2024-07-24 18:11:53.546306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:45.195 [2024-07-24 18:11:53.546348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:45.195 [2024-07-24 18:11:53.546361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d1f40
00:08:45.195 [2024-07-24 18:11:53.546369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:45.195 [2024-07-24 18:11:53.547334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:45.195 [2024-07-24 18:11:53.547357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:08:45.195 [2024-07-24 18:11:53.694282] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:08:45.195 [the same warning repeats twice for each of Malloc2p0 through Malloc2p7]
00:08:45.196 [2024-07-24 18:11:53.727374] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:08:45.196 [2024-07-24 18:11:53.728914] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78).
Queue depth is limited to 78
00:08:45.196 Running I/O for 5 seconds...
00:08:51.768
00:08:51.768 Latency(us)
00:08:51.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:51.768 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x100
00:08:51.768 Malloc0 : 5.61 296.56 18.54 0.00 0.00 425459.24 560.33 1248224.87
00:08:51.768 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x100 length 0x100
00:08:51.768 Malloc0 : 5.98 278.43 17.40 0.00 0.00 416681.66 570.16 885837.00
00:08:51.768 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x80
00:08:51.768 Malloc1p0 : 6.10 52.44 3.28 0.00 0.00 2286978.49 1028.92 3650722.20
00:08:51.768 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x80 length 0x80
00:08:51.768 Malloc1p0 : 6.21 51.52 3.22 0.00 0.00 2190168.14 1002.70 3597035.11
00:08:51.768 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x80
00:08:51.768 Malloc1p1 : 6.10 52.43 3.28 0.00 0.00 2235617.56 1028.92 3543348.02
00:08:51.768 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x80 length 0x80
00:08:51.768 Malloc1p1 : 6.21 51.51 3.22 0.00 0.00 2138445.90 1238.63 3462817.38
00:08:51.768 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x20
00:08:51.768 Malloc2p0 : 5.80 38.62 2.41 0.00 0.00 756221.48 439.09 1355599.05
00:08:51.768 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x20 length 0x20
00:08:51.768 Malloc2p0 : 6.16 38.94 2.43 0.00 0.00 698190.40 475.14 1006632.96
00:08:51.768 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x20
00:08:51.768 Malloc2p1 : 5.87 40.88 2.56 0.00 0.00 718793.60 445.64 1335466.39
00:08:51.768 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x20 length 0x20
00:08:51.768 Malloc2p1 : 6.16 38.93 2.43 0.00 0.00 694263.76 468.58 993211.19
00:08:51.768 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x20
00:08:51.768 Malloc2p2 : 5.87 40.88 2.55 0.00 0.00 714185.52 445.64 1322044.62
00:08:51.768 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x20 length 0x20
00:08:51.768 Malloc2p2 : 6.17 38.93 2.43 0.00 0.00 689894.50 475.14 973078.53
00:08:51.768 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x20
00:08:51.768 Malloc2p3 : 5.87 40.87 2.55 0.00 0.00 709893.71 442.37 1301911.96
00:08:51.768 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x20 length 0x20
00:08:51.768 Malloc2p3 : 6.17 38.92 2.43 0.00 0.00 685533.15 465.31 959656.76
00:08:51.768 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x20
00:08:51.768 Malloc2p4 : 5.87 40.86 2.55 0.00 0.00 705290.00 448.92 1288490.19
00:08:51.768 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x20 length 0x20
00:08:51.768 Malloc2p4 : 6.17 38.92 2.43 0.00 0.00 680970.77 462.03 939524.10
00:08:51.768 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x20
00:08:51.768 Malloc2p5 : 5.87 40.86 2.55 0.00 0.00 701025.82 468.58 1275068.42
00:08:51.768 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x20 length 0x20
00:08:51.768 Malloc2p5 : 6.17 38.91 2.43 0.00 0.00 677088.77 468.58 926102.32
00:08:51.768 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x20
00:08:51.768 Malloc2p6 : 5.88 40.85 2.55 0.00 0.00 697282.59 455.48 1254935.76
00:08:51.768 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x20 length 0x20
00:08:51.768 Malloc2p6 : 6.17 41.46 2.59 0.00 0.00 635128.44 478.41 905969.66
00:08:51.768 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x20
00:08:51.768 Malloc2p7 : 5.88 40.84 2.55 0.00 0.00 692864.95 455.48 1234803.10
00:08:51.768 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x20 length 0x20
00:08:51.768 Malloc2p7 : 6.17 41.46 2.59 0.00 0.00 631176.07 475.14 892547.89
00:08:51.768 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x100
00:08:51.768 TestPT : 6.15 52.33 3.27 0.00 0.00 2092230.86 72980.89 2899102.92
00:08:51.768 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x100 length 0x100
00:08:51.768 TestPT : 6.18 56.99 3.56 0.00 0.00 1809759.81 4587.52 3261490.79
00:08:51.768 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x200
00:08:51.768 raid0 : 6.06 58.09 3.63 0.00 0.00 1858554.27 1114.11 3167538.38
00:08:51.768 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x200 length 0x200
00:08:51.768 raid0 : 6.10 129.20 8.07 0.00 0.00 939058.46 1795.69 1919313.51
00:08:51.768 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:51.768 Verification LBA range: start 0x0 length 0x200
00:08:51.769 concat0 : 6.11 62.89 3.93 0.00 0.00 1682898.90 1055.13 3060164.20
00:08:51.769 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:51.769 Verification LBA range: start 0x200 length 0x200
00:08:51.769 concat0 : 6.21 48.98 3.06 0.00 0.00 2476254.21 1101.00 3892314.11
00:08:51.769 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:51.769 Verification LBA range: start 0x0 length 0x100
00:08:51.769 raid1 : 6.17 70.03 4.38 0.00 0.00 1488949.65 1382.81 2939368.24
00:08:51.769 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:51.769 Verification LBA range: start 0x100 length 0x100
00:08:51.769 raid1 : 6.21 48.97 3.06 0.00 0.00 2421421.96 1389.36 3758096.38
00:08:51.769 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:08:51.769 Verification LBA range: start 0x0 length 0x4e
00:08:51.769 AIO0 : 6.20 99.78 6.24 0.00 0.00 625995.39 616.04 1785095.78
00:08:51.769 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:08:51.769 Verification LBA range: start 0x4e length 0x4e
00:08:51.769 AIO0 : 6.16 43.82 2.74 0.00 0.00 1632655.38 632.42 2469606.20
00:08:51.769 ===================================================================================================================
00:08:51.769 Total : 2095.06 130.94 0.00 0.00 1049521.71 439.09 3892314.11
00:08:51.769
00:08:51.769 real 0m7.248s
00:08:51.769 user 0m13.703s
00:08:51.769 sys 0m0.354s
00:08:51.769 18:12:00 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:51.769 18:12:00 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:51.769
************************************
00:08:51.769 END TEST bdev_verify_big_io
00:08:51.769 ************************************
00:08:52.029 18:12:00 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:52.029 18:12:00 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:08:52.029 18:12:00 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:52.029 18:12:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:08:52.029 ************************************
00:08:52.029 START TEST bdev_write_zeroes
00:08:52.029 ************************************
00:08:52.029 18:12:00 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:52.029 [2024-07-24 18:12:00.478624] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
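The bdev_verify_big_io run above repeatedly warns that the requested queue depth (-q 128) is clamped per bdev (to 32 for the Malloc2p* partitions, to 78 for AIO0). A minimal sketch of that clamping rule, inferred from the warning text rather than taken from the bdevperf source:

```shell
# Effective per-bdev queue depth for a verify job: the requested -q value,
# clamped to the number of IO requests the bdev can accept simultaneously.
effective_depth() {
  if [ "$1" -lt "$2" ]; then echo "$1"; else echo "$2"; fi
}
effective_depth 128 32   # Malloc2p* bdevs -> 32
effective_depth 128 78   # AIO0 -> 78
```

A requested depth at or below the bdev limit (e.g. -q 16) would pass through unchanged and produce no warning.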
00:08:52.029 [2024-07-24 18:12:00.478669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139491 ] 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:01.0 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:01.1 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:01.2 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:01.3 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:01.4 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:01.5 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:01.6 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:01.7 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:02.0 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:02.1 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:02.2 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:02.3 cannot be used 
00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:02.4 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:02.5 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:02.6 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b3:02.7 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:01.0 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:01.1 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:01.2 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:01.3 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:01.4 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:01.5 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:01.6 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:01.7 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:02.0 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:02.1 cannot be used 00:08:52.029 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:02.2 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:02.3 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:02.4 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:02.5 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:02.6 cannot be used 00:08:52.029 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:52.029 EAL: Requested device 0000:b5:02.7 cannot be used 00:08:52.029 [2024-07-24 18:12:00.568942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.288 [2024-07-24 18:12:00.640008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.288 [2024-07-24 18:12:00.779193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:52.288 [2024-07-24 18:12:00.779236] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:52.288 [2024-07-24 18:12:00.779245] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:52.288 [2024-07-24 18:12:00.787203] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:52.288 [2024-07-24 18:12:00.787221] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:52.288 [2024-07-24 18:12:00.795214] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:52.288 [2024-07-24 18:12:00.795229] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:52.288 [2024-07-24 18:12:00.863202] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:52.288 [2024-07-24 18:12:00.863235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.288 [2024-07-24 18:12:00.863246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x103ebd0 00:08:52.288 [2024-07-24 18:12:00.863254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.288 [2024-07-24 18:12:00.864371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.288 [2024-07-24 18:12:00.864393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:52.548 Running I/O for 1 seconds... 00:08:53.928 00:08:53.928 Latency(us) 00:08:53.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.928 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 Malloc0 : 1.03 7585.91 29.63 0.00 0.00 16861.61 458.75 29360.13 00:08:53.928 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 Malloc1p0 : 1.03 7578.82 29.60 0.00 0.00 16859.52 616.04 28940.70 00:08:53.928 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 Malloc1p1 : 1.03 7572.10 29.58 0.00 0.00 16846.44 612.76 28311.55 00:08:53.928 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 Malloc2p0 : 1.03 7565.40 29.55 0.00 0.00 16838.57 638.98 27682.41 00:08:53.928 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 Malloc2p1 : 1.03 7558.63 29.53 0.00 0.00 16826.04 622.59 27053.26 00:08:53.928 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 Malloc2p2 : 1.03 7551.38 29.50 0.00 0.00 16821.00 635.70 26424.12 00:08:53.928 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 
Malloc2p3 : 1.03 7544.17 29.47 0.00 0.00 16808.47 625.87 25794.97 00:08:53.928 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 Malloc2p4 : 1.04 7536.93 29.44 0.00 0.00 16795.57 635.70 25165.82 00:08:53.928 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 Malloc2p5 : 1.04 7529.83 29.41 0.00 0.00 16783.94 632.42 24536.68 00:08:53.928 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 Malloc2p6 : 1.04 7522.50 29.38 0.00 0.00 16777.87 625.87 23907.53 00:08:53.928 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 Malloc2p7 : 1.04 7515.38 29.36 0.00 0.00 16770.74 632.42 23278.39 00:08:53.928 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 TestPT : 1.04 7508.26 29.33 0.00 0.00 16758.50 645.53 22649.24 00:08:53.928 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 raid0 : 1.04 7500.00 29.30 0.00 0.00 16742.77 1094.45 21600.67 00:08:53.928 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 concat0 : 1.04 7491.88 29.27 0.00 0.00 16716.16 1094.45 20447.23 00:08:53.928 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 raid1 : 1.04 7482.48 29.23 0.00 0.00 16688.28 1730.15 18769.51 00:08:53.928 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.928 AIO0 : 1.04 7476.66 29.21 0.00 0.00 16647.63 851.97 17825.79 00:08:53.928 =================================================================================================================== 00:08:53.928 Total : 120520.34 470.78 0.00 0.00 16783.94 458.75 29360.13 00:08:53.928 00:08:53.928 real 0m1.961s 00:08:53.928 user 0m1.617s 00:08:53.928 sys 0m0.282s 00:08:53.928 18:12:02 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:53.928 18:12:02 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:53.928 ************************************ 00:08:53.928 END TEST bdev_write_zeroes 00:08:53.928 ************************************ 00:08:53.928 18:12:02 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:53.928 18:12:02 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:53.928 18:12:02 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.928 18:12:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:53.928 ************************************ 00:08:53.928 START TEST bdev_json_nonenclosed 00:08:53.928 ************************************ 00:08:53.928 18:12:02 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:54.188 [2024-07-24 18:12:02.524447] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:08:54.188 [2024-07-24 18:12:02.524491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139905 ] 00:08:54.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.188 EAL: Requested device 0000:b3:01.0 cannot be used 00:08:54.188 [... identical qat_pci_device_allocate()/EAL messages repeated for devices 0000:b3:01.1 through 0000:b5:02.7, as above ...] [2024-07-24 18:12:02.616436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.188 [2024-07-24 18:12:02.690720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.188 [2024-07-24 18:12:02.690772] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:08:54.188 [2024-07-24 18:12:02.690800] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:54.188 [2024-07-24 18:12:02.690819] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.188 00:08:54.188 real 0m0.289s 00:08:54.188 user 0m0.169s 00:08:54.188 sys 0m0.118s 00:08:54.188 18:12:02 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.188 18:12:02 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:54.188 ************************************ 00:08:54.188 END TEST bdev_json_nonenclosed 00:08:54.188 ************************************ 00:08:54.448 18:12:02 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:54.448 18:12:02 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:54.448 18:12:02 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.448 18:12:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:54.448 ************************************ 00:08:54.448 START TEST bdev_json_nonarray 00:08:54.448 ************************************ 00:08:54.448 18:12:02 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:54.448 [2024-07-24 18:12:02.902738] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:08:54.448 [2024-07-24 18:12:02.902782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140106 ] 00:08:54.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.448 EAL: Requested device 0000:b3:01.0 cannot be used 00:08:54.448 [... identical qat_pci_device_allocate()/EAL messages repeated for devices 0000:b3:01.1 through 0000:b5:02.7, as above ...] [2024-07-24 18:12:02.994379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.716 [2024-07-24 18:12:03.064904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.716 [2024-07-24 18:12:03.064958] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
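The two JSON-configuration failures exercised in this run — "not enclosed in {}" from the bdev_json_nonenclosed test and "'subsystems' should be an array" from the bdev_json_nonarray test — are both raised by json_config_prepare_ctx in SPDK's json_config.c. A minimal Python sketch of those two checks (not SPDK's actual C implementation, and the function name is invented for illustration):

```python
import json

# Sketch of the two config checks the tests above exercise: the top-level
# config must be a JSON object, and "subsystems" (if present) must be an array.
def validate_config(text):
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError:
        return "not valid JSON"
    if not isinstance(cfg, dict):
        return "not enclosed in {}"
    if not isinstance(cfg.get("subsystems", []), list):
        return "'subsystems' should be an array"
    return None  # configuration accepted

print(validate_config('[1, 2]'))              # not enclosed in {}
print(validate_config('{"subsystems": {}}'))  # 'subsystems' should be an array
print(validate_config('{"subsystems": []}'))  # None
```

The first two calls mirror the nonenclosed.json and nonarray.json fixtures fed to bdevperf in this log; in both cases the app then stops with "spdk_app_stop'd on non-zero", which is the expected (passing) outcome for these negative tests.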
00:08:54.716 [2024-07-24 18:12:03.064969] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:54.716 [2024-07-24 18:12:03.064976] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.716 00:08:54.716 real 0m0.285s 00:08:54.716 user 0m0.162s 00:08:54.716 sys 0m0.122s 00:08:54.716 18:12:03 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.716 18:12:03 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:54.716 ************************************ 00:08:54.716 END TEST bdev_json_nonarray 00:08:54.716 ************************************ 00:08:54.716 18:12:03 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:08:54.716 18:12:03 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:08:54.716 18:12:03 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:54.716 18:12:03 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.716 18:12:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:54.716 ************************************ 00:08:54.716 START TEST bdev_qos 00:08:54.716 ************************************ 00:08:54.716 18:12:03 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # qos_test_suite '' 00:08:54.716 18:12:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=2140173 00:08:54.717 18:12:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 2140173' 00:08:54.717 Process qos testing pid: 2140173 00:08:54.717 18:12:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:08:54.717 18:12:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:08:54.717 18:12:03 
blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 2140173 00:08:54.717 18:12:03 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # '[' -z 2140173 ']' 00:08:54.717 18:12:03 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.717 18:12:03 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.717 18:12:03 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.717 18:12:03 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.717 18:12:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:54.717 [2024-07-24 18:12:03.273636] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:08:54.717 [2024-07-24 18:12:03.273680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140173 ] 00:08:54.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:54.976 EAL: Requested device 0000:b3:01.0 cannot be used 00:08:54.976 [... identical qat_pci_device_allocate()/EAL messages repeated for devices 0000:b3:01.1 through 0000:b5:02.7, as above ...] [2024-07-24 18:12:03.366118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.977 [2024-07-24 18:12:03.441224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:55.544 Malloc_0 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.544 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:55.544 [ 00:08:55.544 { 00:08:55.544 "name": "Malloc_0", 00:08:55.544 "aliases": [ 00:08:55.544 "bda58aa0-60dc-413c-9ae4-38e903c5f824" 00:08:55.544 ], 00:08:55.544 "product_name": "Malloc disk", 00:08:55.544 "block_size": 512, 00:08:55.544 "num_blocks": 262144, 00:08:55.544 "uuid": "bda58aa0-60dc-413c-9ae4-38e903c5f824", 00:08:55.544 "assigned_rate_limits": { 00:08:55.544 "rw_ios_per_sec": 0, 00:08:55.544 "rw_mbytes_per_sec": 0, 00:08:55.544 "r_mbytes_per_sec": 0, 00:08:55.544 "w_mbytes_per_sec": 0 00:08:55.544 }, 00:08:55.544 "claimed": false, 00:08:55.544 "zoned": false, 00:08:55.544 "supported_io_types": { 00:08:55.803 "read": true, 00:08:55.803 "write": true, 00:08:55.803 "unmap": true, 00:08:55.803 "flush": true, 00:08:55.803 "reset": true, 00:08:55.803 "nvme_admin": false, 00:08:55.803 "nvme_io": false, 00:08:55.803 "nvme_io_md": false, 00:08:55.803 "write_zeroes": true, 00:08:55.803 "zcopy": true, 00:08:55.803 "get_zone_info": false, 00:08:55.803 
"zone_management": false, 00:08:55.803 "zone_append": false, 00:08:55.803 "compare": false, 00:08:55.803 "compare_and_write": false, 00:08:55.803 "abort": true, 00:08:55.803 "seek_hole": false, 00:08:55.803 "seek_data": false, 00:08:55.803 "copy": true, 00:08:55.803 "nvme_iov_md": false 00:08:55.803 }, 00:08:55.803 "memory_domains": [ 00:08:55.803 { 00:08:55.803 "dma_device_id": "system", 00:08:55.803 "dma_device_type": 1 00:08:55.803 }, 00:08:55.803 { 00:08:55.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.803 "dma_device_type": 2 00:08:55.803 } 00:08:55.803 ], 00:08:55.803 "driver_specific": {} 00:08:55.803 } 00:08:55.803 ] 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:55.803 Null_1 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:55.803 [ 00:08:55.803 { 00:08:55.803 "name": "Null_1", 00:08:55.803 "aliases": [ 00:08:55.803 "d924b7a9-ecfc-49c3-820e-32af138273b6" 00:08:55.803 ], 00:08:55.803 "product_name": "Null disk", 00:08:55.803 "block_size": 512, 00:08:55.803 "num_blocks": 262144, 00:08:55.803 "uuid": "d924b7a9-ecfc-49c3-820e-32af138273b6", 00:08:55.803 "assigned_rate_limits": { 00:08:55.803 "rw_ios_per_sec": 0, 00:08:55.803 "rw_mbytes_per_sec": 0, 00:08:55.803 "r_mbytes_per_sec": 0, 00:08:55.803 "w_mbytes_per_sec": 0 00:08:55.803 }, 00:08:55.803 "claimed": false, 00:08:55.803 "zoned": false, 00:08:55.803 "supported_io_types": { 00:08:55.803 "read": true, 00:08:55.803 "write": true, 00:08:55.803 "unmap": false, 00:08:55.803 "flush": false, 00:08:55.803 "reset": true, 00:08:55.803 "nvme_admin": false, 00:08:55.803 "nvme_io": false, 00:08:55.803 "nvme_io_md": false, 00:08:55.803 "write_zeroes": true, 00:08:55.803 "zcopy": false, 00:08:55.803 "get_zone_info": false, 00:08:55.803 "zone_management": false, 00:08:55.803 "zone_append": false, 00:08:55.803 "compare": false, 00:08:55.803 "compare_and_write": false, 00:08:55.803 "abort": true, 00:08:55.803 "seek_hole": false, 00:08:55.803 "seek_data": false, 00:08:55.803 "copy": false, 00:08:55.803 "nvme_iov_md": false 00:08:55.803 }, 00:08:55.803 "driver_specific": {} 00:08:55.803 } 00:08:55.803 ] 00:08:55.803 18:12:04 
blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:08:55.803 18:12:04 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:08:55.803 Running I/O for 60 seconds... 
00:09:01.082 18:12:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 98141.47 392565.87 0.00 0.00 395264.00 0.00 0.00 ' 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=98141.47 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 98141 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=98141 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=24000 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 24000 -gt 1000 ']' 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 24000 Malloc_0 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 24000 IOPS Malloc_0 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.082 18:12:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:01.082 ************************************ 00:09:01.082 START TEST bdev_qos_iops 00:09:01.082 ************************************ 00:09:01.082 18:12:09 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 24000 IOPS Malloc_0 00:09:01.082 18:12:09 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=24000 00:09:01.082 18:12:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:09:01.082 18:12:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:09:01.082 18:12:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:09:01.082 18:12:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:09:01.082 18:12:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:09:01.082 18:12:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:01.082 18:12:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:09:01.082 18:12:09 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 23986.60 95946.40 0.00 0.00 96672.00 0.00 0.00 ' 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=23986.60 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 23986 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=23986 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=21600 00:09:06.385 18:12:14 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=26400 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 23986 -lt 21600 ']' 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 23986 -gt 26400 ']' 00:09:06.385 00:09:06.385 real 0m5.189s 00:09:06.385 user 0m0.092s 00:09:06.385 sys 0m0.044s 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.385 18:12:14 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:09:06.385 ************************************ 00:09:06.385 END TEST bdev_qos_iops 00:09:06.385 ************************************ 00:09:06.385 18:12:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:09:06.385 18:12:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:09:06.385 18:12:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:09:06.385 18:12:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:09:06.385 18:12:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:06.385 18:12:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:09:06.385 18:12:14 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 33383.70 133534.82 0.00 0.00 135168.00 0.00 0.00 ' 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:09:11.659 18:12:19 
blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=135168.00 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 135168 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=135168 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=13 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 13 -lt 2 ']' 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 13 Null_1 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 13 BANDWIDTH Null_1 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.659 18:12:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:11.659 ************************************ 00:09:11.659 START TEST bdev_qos_bw 00:09:11.659 ************************************ 00:09:11.659 18:12:19 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 13 BANDWIDTH Null_1 00:09:11.659 18:12:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=13 00:09:11.659 18:12:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:09:11.659 18:12:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:09:11.659 18:12:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local 
limit_type=BANDWIDTH 00:09:11.659 18:12:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:09:11.659 18:12:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:09:11.659 18:12:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:11.659 18:12:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:09:11.659 18:12:19 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 3330.29 13321.18 0.00 0.00 13576.00 0.00 0.00 ' 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=13576.00 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 13576 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=13576 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=13312 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=11980 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=14643 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 13576 -lt 11980 ']' 00:09:16.928 18:12:25 
blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 13576 -gt 14643 ']' 00:09:16.928 00:09:16.928 real 0m5.201s 00:09:16.928 user 0m0.095s 00:09:16.928 sys 0m0.039s 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:09:16.928 ************************************ 00:09:16.928 END TEST bdev_qos_bw 00:09:16.928 ************************************ 00:09:16.928 18:12:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:09:16.928 18:12:25 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.928 18:12:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:16.928 18:12:25 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.928 18:12:25 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:09:16.928 18:12:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:16.928 18:12:25 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.928 18:12:25 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:16.928 ************************************ 00:09:16.928 START TEST bdev_qos_ro_bw 00:09:16.928 ************************************ 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result 
BANDWIDTH Malloc_0 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:09:16.928 18:12:25 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 512.30 2049.19 0.00 0.00 2060.00 0.00 0.00 ' 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2060.00 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2060 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2060 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 
-- # upper_limit=2252 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2060 -lt 1843 ']' 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2060 -gt 2252 ']' 00:09:22.203 00:09:22.203 real 0m5.150s 00:09:22.203 user 0m0.086s 00:09:22.203 sys 0m0.039s 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.203 18:12:30 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:09:22.203 ************************************ 00:09:22.203 END TEST bdev_qos_ro_bw 00:09:22.203 ************************************ 00:09:22.203 18:12:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:09:22.203 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.203 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:22.462 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.462 18:12:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:09:22.462 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.462 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:22.462 00:09:22.462 Latency(us) 00:09:22.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.462 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:22.462 Malloc_0 : 26.53 33170.27 129.57 0.00 0.00 7641.81 1336.93 503316.48 00:09:22.463 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:22.463 Null_1 : 26.63 32983.17 128.84 0.00 0.00 7744.39 504.63 95630.13 00:09:22.463 =================================================================================================================== 
00:09:22.463 Total : 66153.45 258.41 0.00 0.00 7693.05 504.63 503316.48 00:09:22.463 0 00:09:22.463 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.463 18:12:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 2140173 00:09:22.463 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 2140173 ']' 00:09:22.463 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 2140173 00:09:22.463 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname 00:09:22.463 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.463 18:12:30 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2140173 00:09:22.463 18:12:31 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:22.463 18:12:31 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:22.463 18:12:31 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2140173' 00:09:22.463 killing process with pid 2140173 00:09:22.463 18:12:31 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 2140173 00:09:22.463 Received shutdown signal, test time was about 26.688266 seconds 00:09:22.463 00:09:22.463 Latency(us) 00:09:22.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.463 =================================================================================================================== 00:09:22.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.463 18:12:31 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 2140173 00:09:22.723 18:12:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:09:22.723 00:09:22.723 real 0m27.978s 00:09:22.723 user 0m28.559s 00:09:22.723 sys 0m0.771s 00:09:22.723 
18:12:31 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.723 18:12:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:22.723 ************************************ 00:09:22.723 END TEST bdev_qos 00:09:22.723 ************************************ 00:09:22.723 18:12:31 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:09:22.723 18:12:31 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:22.723 18:12:31 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.723 18:12:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:22.723 ************************************ 00:09:22.723 START TEST bdev_qd_sampling 00:09:22.723 ************************************ 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=2145487 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 2145487' 00:09:22.723 Process bdev QD sampling period testing pid: 2145487 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 2145487 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 2145487 ']' 00:09:22.723 18:12:31 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.723 18:12:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:22.982 [2024-07-24 18:12:31.331723] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:09:22.982 [2024-07-24 18:12:31.331766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145487 ] 00:09:22.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.982 EAL: Requested device 0000:b3:01.0 cannot be used 00:09:22.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.982 EAL: Requested device 0000:b3:01.1 cannot be used 00:09:22.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.982 EAL: Requested device 0000:b3:01.2 cannot be used 00:09:22.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.982 EAL: Requested device 0000:b3:01.3 cannot be used 00:09:22.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.982 EAL: Requested device 0000:b3:01.4 cannot be used 00:09:22.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.982 EAL: Requested device 0000:b3:01.5 cannot be used 00:09:22.982 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.982 EAL: Requested device 0000:b3:01.6 cannot be used 00:09:22.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.982 EAL: Requested device 0000:b3:01.7 cannot be used 00:09:22.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.982 EAL: Requested device 0000:b3:02.0 cannot be used 00:09:22.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.982 EAL: Requested device 0000:b3:02.1 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b3:02.2 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b3:02.3 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b3:02.4 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b3:02.5 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b3:02.6 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b3:02.7 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:01.0 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:01.1 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:01.2 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:01.3 cannot be used 00:09:22.983 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:01.4 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:01.5 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:01.6 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:01.7 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:02.0 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:02.1 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:02.2 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:02.3 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:02.4 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:02.5 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:02.6 cannot be used 00:09:22.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.983 EAL: Requested device 0000:b5:02.7 cannot be used 00:09:22.983 [2024-07-24 18:12:31.423190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:22.983 [2024-07-24 18:12:31.497953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.983 [2024-07-24 18:12:31.497955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:09:23.551 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.551 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:09:23.551 18:12:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:09:23.551 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.551 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:23.810 Malloc_QD 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.810 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:23.810 [ 00:09:23.810 { 00:09:23.810 "name": "Malloc_QD", 00:09:23.810 "aliases": [ 00:09:23.810 "63fb1760-8d89-4e8b-91f0-ef80e3da3d62" 00:09:23.810 ], 00:09:23.810 "product_name": "Malloc disk", 00:09:23.810 "block_size": 512, 00:09:23.810 "num_blocks": 262144, 00:09:23.810 "uuid": "63fb1760-8d89-4e8b-91f0-ef80e3da3d62", 00:09:23.810 "assigned_rate_limits": { 00:09:23.810 "rw_ios_per_sec": 0, 00:09:23.810 "rw_mbytes_per_sec": 0, 00:09:23.810 "r_mbytes_per_sec": 0, 00:09:23.810 "w_mbytes_per_sec": 0 00:09:23.810 }, 00:09:23.810 "claimed": false, 00:09:23.810 "zoned": false, 00:09:23.810 "supported_io_types": { 00:09:23.810 "read": true, 00:09:23.810 "write": true, 00:09:23.810 "unmap": true, 00:09:23.810 "flush": true, 00:09:23.810 "reset": true, 00:09:23.810 "nvme_admin": false, 00:09:23.810 "nvme_io": false, 00:09:23.810 "nvme_io_md": false, 00:09:23.810 "write_zeroes": true, 00:09:23.810 "zcopy": true, 00:09:23.810 "get_zone_info": false, 00:09:23.810 "zone_management": false, 00:09:23.810 "zone_append": false, 00:09:23.810 "compare": false, 00:09:23.810 "compare_and_write": false, 00:09:23.810 "abort": true, 00:09:23.810 "seek_hole": false, 00:09:23.810 "seek_data": false, 00:09:23.810 "copy": true, 00:09:23.810 "nvme_iov_md": false 00:09:23.810 }, 00:09:23.810 "memory_domains": [ 00:09:23.810 { 00:09:23.810 "dma_device_id": "system", 00:09:23.810 "dma_device_type": 1 00:09:23.810 }, 00:09:23.810 { 00:09:23.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.810 "dma_device_type": 2 00:09:23.810 } 00:09:23.810 ], 00:09:23.811 "driver_specific": {} 00:09:23.811 } 00:09:23.811 ] 00:09:23.811 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.811 18:12:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:09:23.811 18:12:32 
blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:09:23.811 18:12:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:23.811 Running I/O for 5 seconds... 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:09:25.718 "tick_rate": 2500000000, 00:09:25.718 "ticks": 7813088740252490, 00:09:25.718 "bdevs": [ 00:09:25.718 { 00:09:25.718 "name": "Malloc_QD", 00:09:25.718 "bytes_read": 984658432, 00:09:25.718 "num_read_ops": 240388, 00:09:25.718 "bytes_written": 0, 00:09:25.718 
"num_write_ops": 0, 00:09:25.718 "bytes_unmapped": 0, 00:09:25.718 "num_unmap_ops": 0, 00:09:25.718 "bytes_copied": 0, 00:09:25.718 "num_copy_ops": 0, 00:09:25.718 "read_latency_ticks": 2469300446314, 00:09:25.718 "max_read_latency_ticks": 12754700, 00:09:25.718 "min_read_latency_ticks": 250078, 00:09:25.718 "write_latency_ticks": 0, 00:09:25.718 "max_write_latency_ticks": 0, 00:09:25.718 "min_write_latency_ticks": 0, 00:09:25.718 "unmap_latency_ticks": 0, 00:09:25.718 "max_unmap_latency_ticks": 0, 00:09:25.718 "min_unmap_latency_ticks": 0, 00:09:25.718 "copy_latency_ticks": 0, 00:09:25.718 "max_copy_latency_ticks": 0, 00:09:25.718 "min_copy_latency_ticks": 0, 00:09:25.718 "io_error": {}, 00:09:25.718 "queue_depth_polling_period": 10, 00:09:25.718 "queue_depth": 512, 00:09:25.718 "io_time": 30, 00:09:25.718 "weighted_io_time": 15360 00:09:25.718 } 00:09:25.718 ] 00:09:25.718 }' 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:25.718 00:09:25.718 Latency(us) 00:09:25.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.718 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:25.718 Malloc_QD : 2.01 61757.68 241.24 0.00 0.00 4135.87 1225.52 4482.66 00:09:25.718 Job: Malloc_QD (Core Mask 
0x2, workload: randread, depth: 256, IO size: 4096) 00:09:25.718 Malloc_QD : 2.01 62638.97 244.68 0.00 0.00 4078.24 924.06 5111.81 00:09:25.718 =================================================================================================================== 00:09:25.718 Total : 124396.65 485.92 0.00 0.00 4106.85 924.06 5111.81 00:09:25.718 0 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 2145487 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 2145487 ']' 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 2145487 00:09:25.718 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:09:25.978 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.978 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2145487 00:09:25.978 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.978 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.978 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2145487' 00:09:25.978 killing process with pid 2145487 00:09:25.978 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 2145487 00:09:25.978 Received shutdown signal, test time was about 2.088404 seconds 00:09:25.978 00:09:25.978 Latency(us) 00:09:25.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.978 =================================================================================================================== 00:09:25.978 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.978 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@974 -- # wait 2145487 00:09:25.978 18:12:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:09:25.978 00:09:25.978 real 0m3.254s 00:09:25.978 user 0m6.393s 00:09:25.978 sys 0m0.363s 00:09:25.978 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.978 18:12:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:25.978 ************************************ 00:09:25.978 END TEST bdev_qd_sampling 00:09:25.978 ************************************ 00:09:26.238 18:12:34 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:09:26.238 18:12:34 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:26.238 18:12:34 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.238 18:12:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:26.238 ************************************ 00:09:26.238 START TEST bdev_error 00:09:26.238 ************************************ 00:09:26.238 18:12:34 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:09:26.238 18:12:34 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:09:26.238 18:12:34 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:09:26.238 18:12:34 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:09:26.238 18:12:34 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=2146057 00:09:26.238 18:12:34 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 2146057' 00:09:26.238 Process error testing pid: 2146057 00:09:26.238 18:12:34 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:09:26.238 18:12:34 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 2146057 00:09:26.238 18:12:34 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 2146057 ']' 00:09:26.238 18:12:34 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.238 18:12:34 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.238 18:12:34 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.238 18:12:34 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.238 18:12:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:26.238 [2024-07-24 18:12:34.677109] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:09:26.238 [2024-07-24 18:12:34.677152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146057 ] 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:01.0 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:01.1 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:01.2 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:01.3 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:01.4 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:01.5 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:01.6 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:01.7 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:02.0 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:02.1 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:02.2 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:02.3 cannot be used 
00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:02.4 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:02.5 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:02.6 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b3:02.7 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:01.0 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:01.1 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:01.2 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:01.3 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:01.4 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:01.5 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:01.6 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:01.7 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:02.0 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:02.1 cannot be used 00:09:26.238 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:02.2 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:02.3 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:02.4 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:02.5 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:02.6 cannot be used 00:09:26.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.238 EAL: Requested device 0000:b5:02.7 cannot be used 00:09:26.238 [2024-07-24 18:12:34.769159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.497 [2024-07-24 18:12:34.843065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:09:27.066 18:12:35 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:27.066 Dev_1 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.066 18:12:35 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.066 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:27.066 [ 00:09:27.066 { 00:09:27.066 "name": "Dev_1", 00:09:27.066 "aliases": [ 00:09:27.066 "9e931b2b-aa04-4a12-a0d0-1e40ef54804c" 00:09:27.066 ], 00:09:27.066 "product_name": "Malloc disk", 00:09:27.066 "block_size": 512, 00:09:27.066 "num_blocks": 262144, 00:09:27.066 "uuid": "9e931b2b-aa04-4a12-a0d0-1e40ef54804c", 00:09:27.066 "assigned_rate_limits": { 00:09:27.066 "rw_ios_per_sec": 0, 00:09:27.066 "rw_mbytes_per_sec": 0, 00:09:27.066 "r_mbytes_per_sec": 0, 00:09:27.066 "w_mbytes_per_sec": 0 00:09:27.066 }, 00:09:27.066 "claimed": false, 00:09:27.066 "zoned": false, 00:09:27.066 "supported_io_types": { 00:09:27.066 "read": true, 00:09:27.066 "write": true, 00:09:27.066 "unmap": true, 00:09:27.066 "flush": true, 00:09:27.066 "reset": true, 00:09:27.066 "nvme_admin": false, 00:09:27.066 "nvme_io": false, 00:09:27.066 "nvme_io_md": false, 00:09:27.066 "write_zeroes": true, 00:09:27.066 "zcopy": true, 00:09:27.066 "get_zone_info": 
false, 00:09:27.066 "zone_management": false, 00:09:27.066 "zone_append": false, 00:09:27.066 "compare": false, 00:09:27.066 "compare_and_write": false, 00:09:27.066 "abort": true, 00:09:27.066 "seek_hole": false, 00:09:27.066 "seek_data": false, 00:09:27.066 "copy": true, 00:09:27.066 "nvme_iov_md": false 00:09:27.066 }, 00:09:27.066 "memory_domains": [ 00:09:27.067 { 00:09:27.067 "dma_device_id": "system", 00:09:27.067 "dma_device_type": 1 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.067 "dma_device_type": 2 00:09:27.067 } 00:09:27.067 ], 00:09:27.067 "driver_specific": {} 00:09:27.067 } 00:09:27.067 ] 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:09:27.067 18:12:35 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 true 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 18:12:35 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 Dev_2 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 18:12:35 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:09:27.067 18:12:35 blockdev_general.bdev_error -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 [ 00:09:27.067 { 00:09:27.067 "name": "Dev_2", 00:09:27.067 "aliases": [ 00:09:27.067 "e99ea52d-1651-4817-970b-0dc1acdce728" 00:09:27.067 ], 00:09:27.067 "product_name": "Malloc disk", 00:09:27.067 "block_size": 512, 00:09:27.067 "num_blocks": 262144, 00:09:27.067 "uuid": "e99ea52d-1651-4817-970b-0dc1acdce728", 00:09:27.067 "assigned_rate_limits": { 00:09:27.067 "rw_ios_per_sec": 0, 00:09:27.067 "rw_mbytes_per_sec": 0, 00:09:27.067 "r_mbytes_per_sec": 0, 00:09:27.067 "w_mbytes_per_sec": 0 00:09:27.067 }, 00:09:27.067 "claimed": false, 00:09:27.067 "zoned": false, 00:09:27.067 "supported_io_types": { 00:09:27.067 "read": true, 00:09:27.067 "write": true, 00:09:27.067 "unmap": true, 00:09:27.067 "flush": true, 00:09:27.067 "reset": true, 00:09:27.067 "nvme_admin": false, 00:09:27.067 "nvme_io": false, 00:09:27.067 "nvme_io_md": false, 00:09:27.067 "write_zeroes": true, 
00:09:27.067 "zcopy": true, 00:09:27.067 "get_zone_info": false, 00:09:27.067 "zone_management": false, 00:09:27.067 "zone_append": false, 00:09:27.067 "compare": false, 00:09:27.067 "compare_and_write": false, 00:09:27.067 "abort": true, 00:09:27.067 "seek_hole": false, 00:09:27.067 "seek_data": false, 00:09:27.067 "copy": true, 00:09:27.067 "nvme_iov_md": false 00:09:27.067 }, 00:09:27.067 "memory_domains": [ 00:09:27.067 { 00:09:27.067 "dma_device_id": "system", 00:09:27.067 "dma_device_type": 1 00:09:27.067 }, 00:09:27.067 { 00:09:27.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.067 "dma_device_type": 2 00:09:27.067 } 00:09:27.067 ], 00:09:27.067 "driver_specific": {} 00:09:27.067 } 00:09:27.067 ] 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:09:27.067 18:12:35 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:27.067 18:12:35 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.067 18:12:35 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:09:27.067 18:12:35 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:27.326 Running I/O for 5 seconds... 00:09:28.263 18:12:36 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 2146057 00:09:28.263 18:12:36 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 2146057' 00:09:28.263 Process is existed as continue on error is set. 
Pid: 2146057 00:09:28.263 18:12:36 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:09:28.263 18:12:36 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.263 18:12:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:28.263 18:12:36 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.263 18:12:36 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:09:28.263 18:12:36 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.263 18:12:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:28.263 18:12:36 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.263 18:12:36 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:09:28.263 Timeout while waiting for response: 00:09:28.263 00:09:28.263 00:09:32.532 00:09:32.532 Latency(us) 00:09:32.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.532 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:32.532 EE_Dev_1 : 0.93 59535.03 232.56 5.39 0.00 266.55 88.47 439.09 00:09:32.532 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:32.532 Dev_2 : 5.00 127871.27 499.50 0.00 0.00 122.95 44.44 18559.80 00:09:32.532 =================================================================================================================== 00:09:32.532 Total : 187406.30 732.06 5.39 0.00 134.38 44.44 18559.80 00:09:33.100 18:12:41 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 2146057 00:09:33.101 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 2146057 ']' 00:09:33.101 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 2146057 00:09:33.101 18:12:41 
blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:09:33.101 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.101 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2146057 00:09:33.360 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:33.360 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:33.360 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2146057' 00:09:33.360 killing process with pid 2146057 00:09:33.360 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 2146057 00:09:33.360 Received shutdown signal, test time was about 5.000000 seconds 00:09:33.360 00:09:33.360 Latency(us) 00:09:33.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.360 =================================================================================================================== 00:09:33.360 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.360 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@974 -- # wait 2146057 00:09:33.360 18:12:41 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=2147391 00:09:33.360 18:12:41 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:09:33.360 18:12:41 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 2147391' 00:09:33.360 Process error testing pid: 2147391 00:09:33.360 18:12:41 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 2147391 00:09:33.360 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 2147391 ']' 00:09:33.360 18:12:41 
blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.360 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.360 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.360 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.360 18:12:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:33.620 [2024-07-24 18:12:41.979985] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:09:33.620 [2024-07-24 18:12:41.980035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147391 ] 00:09:33.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.620 EAL: Requested device 0000:b3:01.0 cannot be used 00:09:33.621 [2024-07-24 18:12:42.072319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.621 [2024-07-24 18:12:42.145470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.189 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.189 18:12:42 blockdev_general.bdev_error --
common/autotest_common.sh@864 -- # return 0 00:09:34.190 18:12:42 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:34.190 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.190 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.450 Dev_1 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.450 [ 00:09:34.450 { 00:09:34.450 "name": "Dev_1", 00:09:34.450 "aliases": [ 00:09:34.450 "27e8d7c7-fd9d-438f-8532-118d38367a0b" 00:09:34.450 ], 
00:09:34.450 "product_name": "Malloc disk", 00:09:34.450 "block_size": 512, 00:09:34.450 "num_blocks": 262144, 00:09:34.450 "uuid": "27e8d7c7-fd9d-438f-8532-118d38367a0b", 00:09:34.450 "assigned_rate_limits": { 00:09:34.450 "rw_ios_per_sec": 0, 00:09:34.450 "rw_mbytes_per_sec": 0, 00:09:34.450 "r_mbytes_per_sec": 0, 00:09:34.450 "w_mbytes_per_sec": 0 00:09:34.450 }, 00:09:34.450 "claimed": false, 00:09:34.450 "zoned": false, 00:09:34.450 "supported_io_types": { 00:09:34.450 "read": true, 00:09:34.450 "write": true, 00:09:34.450 "unmap": true, 00:09:34.450 "flush": true, 00:09:34.450 "reset": true, 00:09:34.450 "nvme_admin": false, 00:09:34.450 "nvme_io": false, 00:09:34.450 "nvme_io_md": false, 00:09:34.450 "write_zeroes": true, 00:09:34.450 "zcopy": true, 00:09:34.450 "get_zone_info": false, 00:09:34.450 "zone_management": false, 00:09:34.450 "zone_append": false, 00:09:34.450 "compare": false, 00:09:34.450 "compare_and_write": false, 00:09:34.450 "abort": true, 00:09:34.450 "seek_hole": false, 00:09:34.450 "seek_data": false, 00:09:34.450 "copy": true, 00:09:34.450 "nvme_iov_md": false 00:09:34.450 }, 00:09:34.450 "memory_domains": [ 00:09:34.450 { 00:09:34.450 "dma_device_id": "system", 00:09:34.450 "dma_device_type": 1 00:09:34.450 }, 00:09:34.450 { 00:09:34.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.450 "dma_device_type": 2 00:09:34.450 } 00:09:34.450 ], 00:09:34.450 "driver_specific": {} 00:09:34.450 } 00:09:34.450 ] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:09:34.450 18:12:42 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.450 true 00:09:34.450 
18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.450 Dev_2 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.450 [ 00:09:34.450 { 00:09:34.450 "name": "Dev_2", 00:09:34.450 "aliases": [ 00:09:34.450 
"82bd0726-98bc-4f82-bd93-c55fffdfeac6" 00:09:34.450 ], 00:09:34.450 "product_name": "Malloc disk", 00:09:34.450 "block_size": 512, 00:09:34.450 "num_blocks": 262144, 00:09:34.450 "uuid": "82bd0726-98bc-4f82-bd93-c55fffdfeac6", 00:09:34.450 "assigned_rate_limits": { 00:09:34.450 "rw_ios_per_sec": 0, 00:09:34.450 "rw_mbytes_per_sec": 0, 00:09:34.450 "r_mbytes_per_sec": 0, 00:09:34.450 "w_mbytes_per_sec": 0 00:09:34.450 }, 00:09:34.450 "claimed": false, 00:09:34.450 "zoned": false, 00:09:34.450 "supported_io_types": { 00:09:34.450 "read": true, 00:09:34.450 "write": true, 00:09:34.450 "unmap": true, 00:09:34.450 "flush": true, 00:09:34.450 "reset": true, 00:09:34.450 "nvme_admin": false, 00:09:34.450 "nvme_io": false, 00:09:34.450 "nvme_io_md": false, 00:09:34.450 "write_zeroes": true, 00:09:34.450 "zcopy": true, 00:09:34.450 "get_zone_info": false, 00:09:34.450 "zone_management": false, 00:09:34.450 "zone_append": false, 00:09:34.450 "compare": false, 00:09:34.450 "compare_and_write": false, 00:09:34.450 "abort": true, 00:09:34.450 "seek_hole": false, 00:09:34.450 "seek_data": false, 00:09:34.450 "copy": true, 00:09:34.450 "nvme_iov_md": false 00:09:34.450 }, 00:09:34.450 "memory_domains": [ 00:09:34.450 { 00:09:34.450 "dma_device_id": "system", 00:09:34.450 "dma_device_type": 1 00:09:34.450 }, 00:09:34.450 { 00:09:34.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.450 "dma_device_type": 2 00:09:34.450 } 00:09:34.450 ], 00:09:34.450 "driver_specific": {} 00:09:34.450 } 00:09:34.450 ] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:09:34.450 18:12:42 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.450 18:12:42 
blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.450 18:12:42 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:34.450 18:12:42 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 2147391 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2147391 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:34.450 18:12:42 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 2147391 00:09:34.450 Running I/O for 5 seconds... 
00:09:34.450 task offset: 81672 on job bdev=EE_Dev_1 fails 00:09:34.450 00:09:34.450 Latency(us) 00:09:34.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.450 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:34.450 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:09:34.450 EE_Dev_1 : 0.00 44000.00 171.88 10000.00 0.00 246.84 88.88 439.09 00:09:34.450 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:34.450 Dev_2 : 0.00 27562.45 107.67 0.00 0.00 431.43 86.43 799.54 00:09:34.450 =================================================================================================================== 00:09:34.450 Total : 71562.45 279.54 10000.00 0.00 346.96 86.43 799.54 00:09:34.450 [2024-07-24 18:12:43.001721] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.450 request: 00:09:34.450 { 00:09:34.450 "method": "perform_tests", 00:09:34.450 "req_id": 1 00:09:34.450 } 00:09:34.450 Got JSON-RPC error response 00:09:34.450 response: 00:09:34.450 { 00:09:34.450 "code": -32603, 00:09:34.450 "message": "bdevperf failed with error Operation not permitted" 00:09:34.451 } 00:09:34.710 18:12:43 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:09:34.710 18:12:43 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:34.710 18:12:43 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:09:34.710 18:12:43 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # case "$es" in 00:09:34.710 18:12:43 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:09:34.710 18:12:43 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:34.710 00:09:34.710 real 0m8.612s 00:09:34.710 user 0m8.843s 00:09:34.710 sys 0m0.725s 00:09:34.710 18:12:43 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.710 
18:12:43 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:34.710 ************************************ 00:09:34.710 END TEST bdev_error 00:09:34.710 ************************************ 00:09:34.710 18:12:43 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:09:34.710 18:12:43 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.710 18:12:43 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.710 18:12:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:34.971 ************************************ 00:09:34.971 START TEST bdev_stat 00:09:34.971 ************************************ 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=2147681 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 2147681' 00:09:34.971 Process Bdev IO statistics testing pid: 2147681 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 2147681 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 2147681 ']' 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local max_retries=100 
00:09:34.971 18:12:43 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.971 18:12:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:34.971 [2024-07-24 18:12:43.374129] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:09:34.971 [2024-07-24 18:12:43.374174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147681 ] 00:09:34.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:34.971 EAL: Requested device 0000:b3:01.0 cannot be used 00:09:34.972 [2024-07-24 18:12:43.467559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.972 [2024-07-24 18:12:43.542290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.972 [2024-07-24 18:12:43.542293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.910 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.910 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:09:35.910 18:12:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@600
-- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:09:35.910 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:35.911 Malloc_STAT 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_STAT 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:35.911 [ 00:09:35.911 { 00:09:35.911 "name": "Malloc_STAT", 00:09:35.911 "aliases": [ 00:09:35.911 "a93b7c88-c126-4f37-a27b-2251d81cd51c" 00:09:35.911 ], 00:09:35.911 "product_name": "Malloc disk", 00:09:35.911 "block_size": 512, 00:09:35.911 
"num_blocks": 262144, 00:09:35.911 "uuid": "a93b7c88-c126-4f37-a27b-2251d81cd51c", 00:09:35.911 "assigned_rate_limits": { 00:09:35.911 "rw_ios_per_sec": 0, 00:09:35.911 "rw_mbytes_per_sec": 0, 00:09:35.911 "r_mbytes_per_sec": 0, 00:09:35.911 "w_mbytes_per_sec": 0 00:09:35.911 }, 00:09:35.911 "claimed": false, 00:09:35.911 "zoned": false, 00:09:35.911 "supported_io_types": { 00:09:35.911 "read": true, 00:09:35.911 "write": true, 00:09:35.911 "unmap": true, 00:09:35.911 "flush": true, 00:09:35.911 "reset": true, 00:09:35.911 "nvme_admin": false, 00:09:35.911 "nvme_io": false, 00:09:35.911 "nvme_io_md": false, 00:09:35.911 "write_zeroes": true, 00:09:35.911 "zcopy": true, 00:09:35.911 "get_zone_info": false, 00:09:35.911 "zone_management": false, 00:09:35.911 "zone_append": false, 00:09:35.911 "compare": false, 00:09:35.911 "compare_and_write": false, 00:09:35.911 "abort": true, 00:09:35.911 "seek_hole": false, 00:09:35.911 "seek_data": false, 00:09:35.911 "copy": true, 00:09:35.911 "nvme_iov_md": false 00:09:35.911 }, 00:09:35.911 "memory_domains": [ 00:09:35.911 { 00:09:35.911 "dma_device_id": "system", 00:09:35.911 "dma_device_type": 1 00:09:35.911 }, 00:09:35.911 { 00:09:35.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.911 "dma_device_type": 2 00:09:35.911 } 00:09:35.911 ], 00:09:35.911 "driver_specific": {} 00:09:35.911 } 00:09:35.911 ] 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:09:35.911 18:12:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:35.911 Running I/O for 10 seconds... 
00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:09:37.818 "tick_rate": 2500000000, 00:09:37.818 "ticks": 7813118767377168, 00:09:37.818 "bdevs": [ 00:09:37.818 { 00:09:37.818 "name": "Malloc_STAT", 00:09:37.818 "bytes_read": 1005629952, 00:09:37.818 "num_read_ops": 245508, 00:09:37.818 "bytes_written": 0, 00:09:37.818 "num_write_ops": 0, 00:09:37.818 "bytes_unmapped": 0, 00:09:37.818 "num_unmap_ops": 0, 00:09:37.818 "bytes_copied": 0, 00:09:37.818 "num_copy_ops": 0, 00:09:37.818 "read_latency_ticks": 2472098584274, 00:09:37.818 "max_read_latency_ticks": 12259102, 00:09:37.818 "min_read_latency_ticks": 197304, 
00:09:37.818 "write_latency_ticks": 0, 00:09:37.818 "max_write_latency_ticks": 0, 00:09:37.818 "min_write_latency_ticks": 0, 00:09:37.818 "unmap_latency_ticks": 0, 00:09:37.818 "max_unmap_latency_ticks": 0, 00:09:37.818 "min_unmap_latency_ticks": 0, 00:09:37.818 "copy_latency_ticks": 0, 00:09:37.818 "max_copy_latency_ticks": 0, 00:09:37.818 "min_copy_latency_ticks": 0, 00:09:37.818 "io_error": {} 00:09:37.818 } 00:09:37.818 ] 00:09:37.818 }' 00:09:37.818 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:09:37.819 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=245508 00:09:37.819 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:09:37.819 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.819 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:37.819 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.819 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:09:37.819 "tick_rate": 2500000000, 00:09:37.819 "ticks": 7813118930513568, 00:09:37.819 "name": "Malloc_STAT", 00:09:37.819 "channels": [ 00:09:37.819 { 00:09:37.819 "thread_id": 2, 00:09:37.819 "bytes_read": 516947968, 00:09:37.819 "num_read_ops": 126208, 00:09:37.819 "bytes_written": 0, 00:09:37.819 "num_write_ops": 0, 00:09:37.819 "bytes_unmapped": 0, 00:09:37.819 "num_unmap_ops": 0, 00:09:37.819 "bytes_copied": 0, 00:09:37.819 "num_copy_ops": 0, 00:09:37.819 "read_latency_ticks": 1276609365856, 00:09:37.819 "max_read_latency_ticks": 10782464, 00:09:37.819 "min_read_latency_ticks": 6618992, 00:09:37.819 "write_latency_ticks": 0, 00:09:37.819 "max_write_latency_ticks": 0, 00:09:37.819 "min_write_latency_ticks": 0, 00:09:37.819 "unmap_latency_ticks": 0, 00:09:37.819 "max_unmap_latency_ticks": 0, 00:09:37.819 
"min_unmap_latency_ticks": 0, 00:09:37.819 "copy_latency_ticks": 0, 00:09:37.819 "max_copy_latency_ticks": 0, 00:09:37.819 "min_copy_latency_ticks": 0 00:09:37.819 }, 00:09:37.819 { 00:09:37.819 "thread_id": 3, 00:09:37.819 "bytes_read": 522190848, 00:09:37.819 "num_read_ops": 127488, 00:09:37.819 "bytes_written": 0, 00:09:37.819 "num_write_ops": 0, 00:09:37.819 "bytes_unmapped": 0, 00:09:37.819 "num_unmap_ops": 0, 00:09:37.819 "bytes_copied": 0, 00:09:37.819 "num_copy_ops": 0, 00:09:37.819 "read_latency_ticks": 1278232417234, 00:09:37.819 "max_read_latency_ticks": 12259102, 00:09:37.819 "min_read_latency_ticks": 6804288, 00:09:37.819 "write_latency_ticks": 0, 00:09:37.819 "max_write_latency_ticks": 0, 00:09:37.819 "min_write_latency_ticks": 0, 00:09:37.819 "unmap_latency_ticks": 0, 00:09:37.819 "max_unmap_latency_ticks": 0, 00:09:37.819 "min_unmap_latency_ticks": 0, 00:09:37.819 "copy_latency_ticks": 0, 00:09:37.819 "max_copy_latency_ticks": 0, 00:09:37.819 "min_copy_latency_ticks": 0 00:09:37.819 } 00:09:37.819 ] 00:09:37.819 }' 00:09:37.819 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:09:37.819 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=126208 00:09:37.819 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=126208 00:09:37.819 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=127488 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=253696 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:09:38.079 "tick_rate": 2500000000, 00:09:38.079 "ticks": 7813119207690034, 00:09:38.079 "bdevs": [ 00:09:38.079 { 00:09:38.079 "name": "Malloc_STAT", 00:09:38.079 "bytes_read": 1095807488, 00:09:38.079 "num_read_ops": 267524, 00:09:38.079 "bytes_written": 0, 00:09:38.079 "num_write_ops": 0, 00:09:38.079 "bytes_unmapped": 0, 00:09:38.079 "num_unmap_ops": 0, 00:09:38.079 "bytes_copied": 0, 00:09:38.079 "num_copy_ops": 0, 00:09:38.079 "read_latency_ticks": 2695872383450, 00:09:38.079 "max_read_latency_ticks": 12259102, 00:09:38.079 "min_read_latency_ticks": 197304, 00:09:38.079 "write_latency_ticks": 0, 00:09:38.079 "max_write_latency_ticks": 0, 00:09:38.079 "min_write_latency_ticks": 0, 00:09:38.079 "unmap_latency_ticks": 0, 00:09:38.079 "max_unmap_latency_ticks": 0, 00:09:38.079 "min_unmap_latency_ticks": 0, 00:09:38.079 "copy_latency_ticks": 0, 00:09:38.079 "max_copy_latency_ticks": 0, 00:09:38.079 "min_copy_latency_ticks": 0, 00:09:38.079 "io_error": {} 00:09:38.079 } 00:09:38.079 ] 00:09:38.079 }' 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=267524 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 253696 -lt 245508 ']' 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 253696 -gt 267524 ']' 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:38.079 00:09:38.079 
Latency(us) 00:09:38.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.079 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:38.079 Malloc_STAT : 2.18 63100.93 246.49 0.00 0.00 4047.98 1350.04 4325.38 00:09:38.079 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:38.079 Malloc_STAT : 2.18 63685.01 248.77 0.00 0.00 4011.09 1232.08 4928.31 00:09:38.079 =================================================================================================================== 00:09:38.079 Total : 126785.94 495.26 0.00 0.00 4029.45 1232.08 4928.31 00:09:38.079 0 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 2147681 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 2147681 ']' 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 2147681 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2147681 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2147681' 00:09:38.079 killing process with pid 2147681 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 2147681 00:09:38.079 Received shutdown signal, test time was about 2.253662 seconds 00:09:38.079 00:09:38.079 Latency(us) 
00:09:38.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.079 =================================================================================================================== 00:09:38.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:38.079 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 2147681 00:09:38.339 18:12:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:09:38.339 00:09:38.339 real 0m3.421s 00:09:38.339 user 0m6.845s 00:09:38.339 sys 0m0.389s 00:09:38.339 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.339 18:12:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:38.339 ************************************ 00:09:38.339 END TEST bdev_stat 00:09:38.339 ************************************ 00:09:38.339 18:12:46 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:09:38.339 18:12:46 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:09:38.339 18:12:46 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:38.339 18:12:46 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:09:38.339 18:12:46 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:09:38.339 18:12:46 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:38.339 18:12:46 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:09:38.339 18:12:46 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:09:38.339 18:12:46 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:09:38.339 18:12:46 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:09:38.339 00:09:38.339 real 1m46.154s 00:09:38.339 user 7m6.083s 00:09:38.339 sys 0m19.244s 00:09:38.339 18:12:46 
blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.339 18:12:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:38.339 ************************************ 00:09:38.339 END TEST blockdev_general 00:09:38.339 ************************************ 00:09:38.339 18:12:46 -- spdk/autotest.sh@194 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:38.339 18:12:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.339 18:12:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.339 18:12:46 -- common/autotest_common.sh@10 -- # set +x 00:09:38.339 ************************************ 00:09:38.339 START TEST bdev_raid 00:09:38.339 ************************************ 00:09:38.339 18:12:46 bdev_raid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:38.600 * Looking for test storage... 00:09:38.600 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:38.600 18:12:46 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:38.600 18:12:46 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:38.600 18:12:46 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:09:38.600 18:12:46 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:09:38.600 18:12:46 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:09:38.600 18:12:47 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:09:38.600 18:12:47 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:09:38.600 18:12:47 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:09:38.600 18:12:47 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:09:38.600 18:12:47 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 
00:09:38.600 18:12:47 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:09:38.600 18:12:47 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:38.600 18:12:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.600 18:12:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.600 18:12:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.600 ************************************ 00:09:38.600 START TEST raid_function_test_raid0 00:09:38.600 ************************************ 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=2148314 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2148314' 00:09:38.600 Process raid pid: 2148314 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 2148314 /var/tmp/spdk-raid.sock 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 2148314 ']' 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 
00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:38.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.600 18:12:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:38.600 [2024-07-24 18:12:47.115463] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:09:38.600 [2024-07-24 18:12:47.115506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:01.0 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:01.1 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:01.2 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:01.3 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:01.4 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:01.5 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:01.6 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: 
Requested device 0000:b3:01.7 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:02.0 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:02.1 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:02.2 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:02.3 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:02.4 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:02.5 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:02.6 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b3:02.7 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:01.0 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:01.1 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:01.2 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:01.3 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:01.4 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 
0000:b5:01.5 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:01.6 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:01.7 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:02.0 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:02.1 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:02.2 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:02.3 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:02.4 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:02.5 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:02.6 cannot be used 00:09:38.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:38.600 EAL: Requested device 0000:b5:02.7 cannot be used 00:09:38.866 [2024-07-24 18:12:47.209790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.866 [2024-07-24 18:12:47.283648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.866 [2024-07-24 18:12:47.342889] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.866 [2024-07-24 18:12:47.342918] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.438 18:12:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.438 
18:12:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:09:39.438 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:09:39.438 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:09:39.438 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:39.438 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:09:39.438 18:12:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:39.698 [2024-07-24 18:12:48.091199] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:39.698 [2024-07-24 18:12:48.092222] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:39.698 [2024-07-24 18:12:48.092266] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xee2cd0 00:09:39.698 [2024-07-24 18:12:48.092273] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:39.698 [2024-07-24 18:12:48.092401] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd45f80 00:09:39.698 [2024-07-24 18:12:48.092482] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xee2cd0 00:09:39.698 [2024-07-24 18:12:48.092489] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0xee2cd0 00:09:39.698 [2024-07-24 18:12:48.092556] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.698 Base_1 00:09:39.698 Base_2 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:39.698 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:39.958 [2024-07-24 18:12:48.448130] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd284f0 00:09:39.958 /dev/nbd0 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:39.958 1+0 records in 00:09:39.958 1+0 records out 00:09:39.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212202 s, 19.3 MB/s 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:39.958 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:40.217 { 00:09:40.217 "nbd_device": "/dev/nbd0", 00:09:40.217 "bdev_name": "raid" 00:09:40.217 } 00:09:40.217 ]' 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:40.217 { 00:09:40.217 "nbd_device": "/dev/nbd0", 00:09:40.217 "bdev_name": "raid" 00:09:40.217 } 00:09:40.217 ]' 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:09:40.217 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:40.218 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:40.218 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:40.218 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:40.218 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:40.218 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:40.218 4096+0 records in 00:09:40.218 4096+0 records out 00:09:40.218 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0293843 s, 71.4 MB/s 00:09:40.218 18:12:48 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:40.477 4096+0 records in 00:09:40.477 4096+0 records out 00:09:40.477 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.199499 s, 10.5 MB/s 00:09:40.477 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:40.477 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:40.477 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:40.477 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:40.477 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:40.477 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:40.477 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:40.477 128+0 records in 00:09:40.477 128+0 records out 00:09:40.477 65536 bytes (66 kB, 64 KiB) copied, 0.000822559 s, 79.7 MB/s 00:09:40.477 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:40.477 18:12:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # 
unmap_len=1041920 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:40.477 2035+0 records in 00:09:40.477 2035+0 records out 00:09:40.477 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0117669 s, 88.5 MB/s 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:40.477 456+0 records in 00:09:40.477 456+0 records out 00:09:40.477 233472 bytes (233 kB, 228 KiB) copied, 0.00273709 s, 85.3 MB/s 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:40.477 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 
00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:40.736 [2024-07-24 18:12:49.243304] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:40.736 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:40.995 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:40.995 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:40.995 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:40.995 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:40.995 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:40.995 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:40.995 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:40.995 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:40.995 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:40.995 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 2148314 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 2148314 ']' 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 2148314 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2148314 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2148314' 00:09:40.996 killing process with pid 2148314 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 2148314 00:09:40.996 [2024-07-24 18:12:49.538304] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.996 [2024-07-24 18:12:49.538353] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.996 [2024-07-24 18:12:49.538382] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.996 [2024-07-24 18:12:49.538390] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xee2cd0 name raid, state offline 00:09:40.996 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 2148314 00:09:40.996 [2024-07-24 18:12:49.553261] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.255 18:12:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:09:41.255 00:09:41.255 real 0m2.658s 00:09:41.255 user 0m3.373s 00:09:41.255 sys 0m1.022s 00:09:41.255 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.255 18:12:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:41.255 ************************************ 00:09:41.255 END TEST raid_function_test_raid0 00:09:41.255 ************************************ 00:09:41.255 
18:12:49 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:09:41.255 18:12:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:41.255 18:12:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.255 18:12:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.255 ************************************ 00:09:41.255 START TEST raid_function_test_concat 00:09:41.255 ************************************ 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=2148924 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 2148924' 00:09:41.255 Process raid pid: 2148924 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 2148924 /var/tmp/spdk-raid.sock 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 2148924 ']' 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:41.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.255 18:12:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:41.516 [2024-07-24 18:12:49.856723] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:09:41.516 [2024-07-24 18:12:49.856766] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:01.0 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:01.1 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:01.2 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:01.3 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:01.4 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:01.5 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:01.6 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:01.7 cannot be used 00:09:41.516 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:02.0 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:02.1 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:02.2 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:02.3 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:02.4 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:02.5 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:02.6 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b3:02.7 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:01.0 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:01.1 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:01.2 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:01.3 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:01.4 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:01.5 cannot be used 00:09:41.516 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:01.6 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:01.7 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:02.0 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:02.1 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:02.2 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:02.3 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:02.4 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:02.5 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:02.6 cannot be used 00:09:41.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.516 EAL: Requested device 0000:b5:02.7 cannot be used 00:09:41.516 [2024-07-24 18:12:49.951154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.516 [2024-07-24 18:12:50.028156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.516 [2024-07-24 18:12:50.077815] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.516 [2024-07-24 18:12:50.077839] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.085 18:12:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.085 18:12:50 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:09:42.085 18:12:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:09:42.085 18:12:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:09:42.085 18:12:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:42.085 18:12:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:09:42.085 18:12:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:42.345 [2024-07-24 18:12:50.837946] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:42.345 [2024-07-24 18:12:50.838955] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:42.345 [2024-07-24 18:12:50.838995] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x13f6cd0 00:09:42.345 [2024-07-24 18:12:50.839002] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:42.345 [2024-07-24 18:12:50.839127] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1259f80 00:09:42.345 [2024-07-24 18:12:50.839203] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13f6cd0 00:09:42.345 [2024-07-24 18:12:50.839210] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x13f6cd0 00:09:42.345 [2024-07-24 18:12:50.839271] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.345 Base_1 00:09:42.345 Base_2 00:09:42.345 18:12:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:42.345 18:12:50 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:42.345 18:12:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:42.605 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:42.605 [2024-07-24 18:12:51.186858] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x123a7f0 00:09:42.605 /dev/nbd0 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.865 1+0 records in 00:09:42.865 1+0 records out 00:09:42.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275175 s, 14.9 MB/s 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:42.865 { 00:09:42.865 "nbd_device": "/dev/nbd0", 00:09:42.865 "bdev_name": "raid" 00:09:42.865 } 00:09:42.865 ]' 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:42.865 { 00:09:42.865 "nbd_device": "/dev/nbd0", 00:09:42.865 "bdev_name": "raid" 00:09:42.865 } 00:09:42.865 ]' 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:42.865 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 
00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:43.124 4096+0 records in 00:09:43.124 4096+0 records out 00:09:43.124 2097152 bytes (2.1 MB, 2.0 MiB) 
copied, 0.0295865 s, 70.9 MB/s 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:43.124 4096+0 records in 00:09:43.124 4096+0 records out 00:09:43.124 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.200578 s, 10.5 MB/s 00:09:43.124 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:43.384 128+0 records in 00:09:43.384 128+0 records out 00:09:43.384 65536 bytes (66 kB, 64 KiB) copied, 0.000828158 s, 79.1 MB/s 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:43.384 18:12:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:43.384 2035+0 records in 00:09:43.384 2035+0 records out 00:09:43.384 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0106206 s, 98.1 MB/s 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:43.384 456+0 records in 00:09:43.384 456+0 records out 00:09:43.384 233472 bytes (233 kB, 228 KiB) copied, 0.00269995 s, 86.5 MB/s 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:43.384 18:12:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.384 18:12:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:43.643 [2024-07-24 18:12:52.010315] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 
-- # return 0 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:43.643 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 2148924 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 2148924 ']' 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 2148924 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- 
# uname 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2148924 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2148924' 00:09:43.903 killing process with pid 2148924 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 2148924 00:09:43.903 [2024-07-24 18:12:52.297423] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.903 [2024-07-24 18:12:52.297472] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.903 [2024-07-24 18:12:52.297502] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.903 [2024-07-24 18:12:52.297510] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13f6cd0 name raid, state offline 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 2148924 00:09:43.903 [2024-07-24 18:12:52.312572] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:09:43.903 00:09:43.903 real 0m2.679s 00:09:43.903 user 0m3.396s 00:09:43.903 sys 0m1.044s 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.903 18:12:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:43.903 ************************************ 00:09:43.903 END TEST 
raid_function_test_concat 00:09:43.903 ************************************ 00:09:44.163 18:12:52 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:09:44.163 18:12:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:44.163 18:12:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.163 18:12:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.163 ************************************ 00:09:44.163 START TEST raid0_resize_test 00:09:44.163 ************************************ 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid0_resize_test 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=2149542 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 2149542' 00:09:44.163 Process raid pid: 2149542 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 2149542 /var/tmp/spdk-raid.sock 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 2149542 ']' 00:09:44.163 18:12:52 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:44.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.163 18:12:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.163 [2024-07-24 18:12:52.598768] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:09:44.163 [2024-07-24 18:12:52.598811] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.163 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:44.163 EAL: Requested device 0000:b3:01.0 cannot be used 00:09:44.163 [2024-07-24 18:12:52.692368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.423 [2024-07-24 18:12:52.767254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.423 [2024-07-24 18:12:52.816520] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.423 [2024-07-24 18:12:52.816543]
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.989 18:12:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.989 18:12:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:09:44.989 18:12:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:09:44.989 Base_1 00:09:44.989 18:12:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:09:45.248 Base_2 00:09:45.248 18:12:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:09:45.507 [2024-07-24 18:12:53.860327] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:45.507 [2024-07-24 18:12:53.861380] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:45.507 [2024-07-24 18:12:53.861414] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xa09fb0 00:09:45.507 [2024-07-24 18:12:53.861420] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:45.507 [2024-07-24 18:12:53.861564] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8601f0 00:09:45.507 [2024-07-24 18:12:53.861641] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa09fb0 00:09:45.507 [2024-07-24 18:12:53.861648] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0xa09fb0 00:09:45.507 [2024-07-24 18:12:53.861732] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.507 18:12:53 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:09:45.507 [2024-07-24 18:12:54.028748] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:45.507 [2024-07-24 18:12:54.028759] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:45.507 true 00:09:45.507 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:45.507 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:09:45.790 [2024-07-24 18:12:54.193268] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.790 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:09:45.790 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:09:45.790 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:09:45.790 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:09:45.790 [2024-07-24 18:12:54.365594] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:45.790 [2024-07-24 18:12:54.365606] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:45.790 [2024-07-24 18:12:54.365622] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:46.084 true 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:46.084 [2024-07-24 18:12:54.538146] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 2149542 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 2149542 ']' 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 2149542 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2149542 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2149542' 00:09:46.084 killing process with pid 2149542 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 2149542 00:09:46.084 [2024-07-24 18:12:54.612821] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.084 [2024-07-24 18:12:54.612863] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.084 [2024-07-24 18:12:54.612894] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:09:46.084 [2024-07-24 18:12:54.612902] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa09fb0 name Raid, state offline 00:09:46.084 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 2149542 00:09:46.084 [2024-07-24 18:12:54.613975] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.342 18:12:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:09:46.342 00:09:46.342 real 0m2.224s 00:09:46.342 user 0m3.286s 00:09:46.342 sys 0m0.497s 00:09:46.342 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.342 18:12:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.342 ************************************ 00:09:46.342 END TEST raid0_resize_test 00:09:46.342 ************************************ 00:09:46.342 18:12:54 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:09:46.342 18:12:54 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:46.342 18:12:54 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:46.342 18:12:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:46.342 18:12:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.342 18:12:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.342 ************************************ 00:09:46.342 START TEST raid_state_function_test 00:09:46.342 ************************************ 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:46.342 18:12:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2149858 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2149858' 00:09:46.342 Process raid pid: 2149858 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2149858 /var/tmp/spdk-raid.sock 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2149858 ']' 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:46.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.342 18:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.342 [2024-07-24 18:12:54.917065] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:09:46.342 [2024-07-24 18:12:54.917118] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.602 EAL: Requested device 0000:b3:01.0 cannot be used 00:09:46.602 [2024-07-24 18:12:55.015545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.602 [2024-07-24 18:12:55.089928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.602 [2024-07-24 18:12:55.144149] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.602 [2024-07-24 18:12:55.144170] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.170 18:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.170 18:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:47.170 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:47.429 [2024-07-24 18:12:55.871713] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.429 [2024-07-24 18:12:55.871743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:47.429 [2024-07-24 18:12:55.871751] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.429 [2024-07-24 18:12:55.871759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.429 18:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.687 18:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:47.687 "name": "Existed_Raid", 00:09:47.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.687 "strip_size_kb": 64, 
00:09:47.687 "state": "configuring", 00:09:47.687 "raid_level": "raid0", 00:09:47.687 "superblock": false, 00:09:47.687 "num_base_bdevs": 2, 00:09:47.687 "num_base_bdevs_discovered": 0, 00:09:47.688 "num_base_bdevs_operational": 2, 00:09:47.688 "base_bdevs_list": [ 00:09:47.688 { 00:09:47.688 "name": "BaseBdev1", 00:09:47.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.688 "is_configured": false, 00:09:47.688 "data_offset": 0, 00:09:47.688 "data_size": 0 00:09:47.688 }, 00:09:47.688 { 00:09:47.688 "name": "BaseBdev2", 00:09:47.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.688 "is_configured": false, 00:09:47.688 "data_offset": 0, 00:09:47.688 "data_size": 0 00:09:47.688 } 00:09:47.688 ] 00:09:47.688 }' 00:09:47.688 18:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:47.688 18:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.946 18:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:48.204 [2024-07-24 18:12:56.709796] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.204 [2024-07-24 18:12:56.709813] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x212f1a0 name Existed_Raid, state configuring 00:09:48.204 18:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:48.463 [2024-07-24 18:12:56.886260] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.463 [2024-07-24 18:12:56.886278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.463 [2024-07-24 18:12:56.886284] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.463 [2024-07-24 18:12:56.886292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.463 18:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.722 [2024-07-24 18:12:57.071154] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.722 BaseBdev1 00:09:48.722 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:48.722 18:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:48.722 18:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.722 18:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:48.722 18:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.722 18:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.722 18:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:48.722 18:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.981 [ 00:09:48.981 { 00:09:48.981 "name": "BaseBdev1", 00:09:48.981 "aliases": [ 00:09:48.981 "8d82f438-e8b1-4354-81ba-a2d6dec7a8db" 00:09:48.981 ], 00:09:48.981 "product_name": "Malloc disk", 00:09:48.981 "block_size": 512, 00:09:48.981 "num_blocks": 65536, 00:09:48.981 "uuid": 
"8d82f438-e8b1-4354-81ba-a2d6dec7a8db", 00:09:48.981 "assigned_rate_limits": { 00:09:48.981 "rw_ios_per_sec": 0, 00:09:48.981 "rw_mbytes_per_sec": 0, 00:09:48.981 "r_mbytes_per_sec": 0, 00:09:48.981 "w_mbytes_per_sec": 0 00:09:48.981 }, 00:09:48.981 "claimed": true, 00:09:48.981 "claim_type": "exclusive_write", 00:09:48.981 "zoned": false, 00:09:48.981 "supported_io_types": { 00:09:48.981 "read": true, 00:09:48.981 "write": true, 00:09:48.981 "unmap": true, 00:09:48.981 "flush": true, 00:09:48.981 "reset": true, 00:09:48.981 "nvme_admin": false, 00:09:48.981 "nvme_io": false, 00:09:48.981 "nvme_io_md": false, 00:09:48.981 "write_zeroes": true, 00:09:48.981 "zcopy": true, 00:09:48.981 "get_zone_info": false, 00:09:48.981 "zone_management": false, 00:09:48.981 "zone_append": false, 00:09:48.981 "compare": false, 00:09:48.981 "compare_and_write": false, 00:09:48.981 "abort": true, 00:09:48.981 "seek_hole": false, 00:09:48.981 "seek_data": false, 00:09:48.981 "copy": true, 00:09:48.981 "nvme_iov_md": false 00:09:48.981 }, 00:09:48.981 "memory_domains": [ 00:09:48.981 { 00:09:48.981 "dma_device_id": "system", 00:09:48.981 "dma_device_type": 1 00:09:48.981 }, 00:09:48.981 { 00:09:48.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.981 "dma_device_type": 2 00:09:48.981 } 00:09:48.981 ], 00:09:48.981 "driver_specific": {} 00:09:48.981 } 00:09:48.981 ] 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:48.981 18:12:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.981 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.240 18:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:49.240 "name": "Existed_Raid", 00:09:49.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.240 "strip_size_kb": 64, 00:09:49.240 "state": "configuring", 00:09:49.240 "raid_level": "raid0", 00:09:49.240 "superblock": false, 00:09:49.240 "num_base_bdevs": 2, 00:09:49.240 "num_base_bdevs_discovered": 1, 00:09:49.240 "num_base_bdevs_operational": 2, 00:09:49.240 "base_bdevs_list": [ 00:09:49.240 { 00:09:49.240 "name": "BaseBdev1", 00:09:49.240 "uuid": "8d82f438-e8b1-4354-81ba-a2d6dec7a8db", 00:09:49.240 "is_configured": true, 00:09:49.240 "data_offset": 0, 00:09:49.240 "data_size": 65536 00:09:49.240 }, 00:09:49.240 { 00:09:49.240 "name": "BaseBdev2", 00:09:49.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.240 "is_configured": false, 00:09:49.240 "data_offset": 0, 00:09:49.240 "data_size": 0 00:09:49.240 } 00:09:49.240 ] 00:09:49.240 }' 00:09:49.240 18:12:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:49.240 18:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.498 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:49.757 [2024-07-24 18:12:58.222112] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.757 [2024-07-24 18:12:58.222140] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x212ea90 name Existed_Raid, state configuring 00:09:49.757 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:50.016 [2024-07-24 18:12:58.386557] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.016 [2024-07-24 18:12:58.387662] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.016 [2024-07-24 18:12:58.387687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:50.016 "name": "Existed_Raid", 00:09:50.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.016 "strip_size_kb": 64, 00:09:50.016 "state": "configuring", 00:09:50.016 "raid_level": "raid0", 00:09:50.016 "superblock": false, 00:09:50.016 "num_base_bdevs": 2, 00:09:50.016 "num_base_bdevs_discovered": 1, 00:09:50.016 "num_base_bdevs_operational": 2, 00:09:50.016 "base_bdevs_list": [ 00:09:50.016 { 00:09:50.016 "name": "BaseBdev1", 00:09:50.016 "uuid": "8d82f438-e8b1-4354-81ba-a2d6dec7a8db", 00:09:50.016 "is_configured": true, 00:09:50.016 "data_offset": 0, 00:09:50.016 "data_size": 65536 00:09:50.016 }, 00:09:50.016 { 00:09:50.016 "name": "BaseBdev2", 00:09:50.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.016 "is_configured": false, 00:09:50.016 "data_offset": 0, 00:09:50.016 "data_size": 0 00:09:50.016 } 00:09:50.016 ] 00:09:50.016 }' 
00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:50.016 18:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.583 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:50.842 [2024-07-24 18:12:59.211348] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.842 [2024-07-24 18:12:59.211373] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x212f880 00:09:50.842 [2024-07-24 18:12:59.211378] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:50.842 [2024-07-24 18:12:59.211514] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22e29a0 00:09:50.842 [2024-07-24 18:12:59.211594] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x212f880 00:09:50.842 [2024-07-24 18:12:59.211601] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x212f880 00:09:50.842 [2024-07-24 18:12:59.211729] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.842 BaseBdev2 00:09:50.842 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:50.842 18:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:50.842 18:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.842 18:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:50.842 18:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.842 18:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:50.842 18:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:50.842 18:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.101 [ 00:09:51.101 { 00:09:51.101 "name": "BaseBdev2", 00:09:51.101 "aliases": [ 00:09:51.101 "fe5b5ebb-297a-42da-98be-de8955546fbf" 00:09:51.101 ], 00:09:51.101 "product_name": "Malloc disk", 00:09:51.101 "block_size": 512, 00:09:51.101 "num_blocks": 65536, 00:09:51.101 "uuid": "fe5b5ebb-297a-42da-98be-de8955546fbf", 00:09:51.101 "assigned_rate_limits": { 00:09:51.101 "rw_ios_per_sec": 0, 00:09:51.101 "rw_mbytes_per_sec": 0, 00:09:51.101 "r_mbytes_per_sec": 0, 00:09:51.101 "w_mbytes_per_sec": 0 00:09:51.101 }, 00:09:51.101 "claimed": true, 00:09:51.101 "claim_type": "exclusive_write", 00:09:51.101 "zoned": false, 00:09:51.101 "supported_io_types": { 00:09:51.101 "read": true, 00:09:51.101 "write": true, 00:09:51.101 "unmap": true, 00:09:51.101 "flush": true, 00:09:51.101 "reset": true, 00:09:51.101 "nvme_admin": false, 00:09:51.101 "nvme_io": false, 00:09:51.101 "nvme_io_md": false, 00:09:51.101 "write_zeroes": true, 00:09:51.101 "zcopy": true, 00:09:51.101 "get_zone_info": false, 00:09:51.101 "zone_management": false, 00:09:51.101 "zone_append": false, 00:09:51.101 "compare": false, 00:09:51.101 "compare_and_write": false, 00:09:51.101 "abort": true, 00:09:51.101 "seek_hole": false, 00:09:51.101 "seek_data": false, 00:09:51.101 "copy": true, 00:09:51.101 "nvme_iov_md": false 00:09:51.101 }, 00:09:51.101 "memory_domains": [ 00:09:51.101 { 00:09:51.101 "dma_device_id": "system", 00:09:51.101 "dma_device_type": 1 00:09:51.101 }, 00:09:51.101 { 00:09:51.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.101 "dma_device_type": 2 
00:09:51.101 } 00:09:51.101 ], 00:09:51.101 "driver_specific": {} 00:09:51.101 } 00:09:51.101 ] 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.101 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.360 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:09:51.360 "name": "Existed_Raid", 00:09:51.360 "uuid": "ae16ff68-2ace-4fd8-984e-16c767fd552c", 00:09:51.360 "strip_size_kb": 64, 00:09:51.360 "state": "online", 00:09:51.360 "raid_level": "raid0", 00:09:51.360 "superblock": false, 00:09:51.360 "num_base_bdevs": 2, 00:09:51.360 "num_base_bdevs_discovered": 2, 00:09:51.360 "num_base_bdevs_operational": 2, 00:09:51.360 "base_bdevs_list": [ 00:09:51.360 { 00:09:51.360 "name": "BaseBdev1", 00:09:51.360 "uuid": "8d82f438-e8b1-4354-81ba-a2d6dec7a8db", 00:09:51.360 "is_configured": true, 00:09:51.360 "data_offset": 0, 00:09:51.360 "data_size": 65536 00:09:51.360 }, 00:09:51.360 { 00:09:51.360 "name": "BaseBdev2", 00:09:51.360 "uuid": "fe5b5ebb-297a-42da-98be-de8955546fbf", 00:09:51.360 "is_configured": true, 00:09:51.360 "data_offset": 0, 00:09:51.360 "data_size": 65536 00:09:51.360 } 00:09:51.360 ] 00:09:51.360 }' 00:09:51.360 18:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.360 18:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.620 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:51.620 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:51.620 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:51.620 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:51.620 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:51.620 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:51.620 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:51.620 18:13:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:51.880 [2024-07-24 18:13:00.362500] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.880 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:51.880 "name": "Existed_Raid", 00:09:51.880 "aliases": [ 00:09:51.880 "ae16ff68-2ace-4fd8-984e-16c767fd552c" 00:09:51.880 ], 00:09:51.880 "product_name": "Raid Volume", 00:09:51.880 "block_size": 512, 00:09:51.880 "num_blocks": 131072, 00:09:51.880 "uuid": "ae16ff68-2ace-4fd8-984e-16c767fd552c", 00:09:51.880 "assigned_rate_limits": { 00:09:51.880 "rw_ios_per_sec": 0, 00:09:51.880 "rw_mbytes_per_sec": 0, 00:09:51.880 "r_mbytes_per_sec": 0, 00:09:51.880 "w_mbytes_per_sec": 0 00:09:51.880 }, 00:09:51.880 "claimed": false, 00:09:51.880 "zoned": false, 00:09:51.880 "supported_io_types": { 00:09:51.880 "read": true, 00:09:51.880 "write": true, 00:09:51.880 "unmap": true, 00:09:51.880 "flush": true, 00:09:51.880 "reset": true, 00:09:51.880 "nvme_admin": false, 00:09:51.880 "nvme_io": false, 00:09:51.880 "nvme_io_md": false, 00:09:51.880 "write_zeroes": true, 00:09:51.880 "zcopy": false, 00:09:51.880 "get_zone_info": false, 00:09:51.880 "zone_management": false, 00:09:51.880 "zone_append": false, 00:09:51.880 "compare": false, 00:09:51.880 "compare_and_write": false, 00:09:51.880 "abort": false, 00:09:51.880 "seek_hole": false, 00:09:51.880 "seek_data": false, 00:09:51.880 "copy": false, 00:09:51.880 "nvme_iov_md": false 00:09:51.880 }, 00:09:51.880 "memory_domains": [ 00:09:51.880 { 00:09:51.880 "dma_device_id": "system", 00:09:51.880 "dma_device_type": 1 00:09:51.880 }, 00:09:51.880 { 00:09:51.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.880 "dma_device_type": 2 00:09:51.880 }, 00:09:51.880 { 00:09:51.880 "dma_device_id": "system", 00:09:51.880 "dma_device_type": 1 00:09:51.880 }, 00:09:51.880 { 00:09:51.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:51.880 "dma_device_type": 2 00:09:51.880 } 00:09:51.880 ], 00:09:51.880 "driver_specific": { 00:09:51.880 "raid": { 00:09:51.880 "uuid": "ae16ff68-2ace-4fd8-984e-16c767fd552c", 00:09:51.880 "strip_size_kb": 64, 00:09:51.880 "state": "online", 00:09:51.880 "raid_level": "raid0", 00:09:51.880 "superblock": false, 00:09:51.880 "num_base_bdevs": 2, 00:09:51.880 "num_base_bdevs_discovered": 2, 00:09:51.880 "num_base_bdevs_operational": 2, 00:09:51.880 "base_bdevs_list": [ 00:09:51.880 { 00:09:51.880 "name": "BaseBdev1", 00:09:51.880 "uuid": "8d82f438-e8b1-4354-81ba-a2d6dec7a8db", 00:09:51.880 "is_configured": true, 00:09:51.880 "data_offset": 0, 00:09:51.880 "data_size": 65536 00:09:51.880 }, 00:09:51.880 { 00:09:51.880 "name": "BaseBdev2", 00:09:51.880 "uuid": "fe5b5ebb-297a-42da-98be-de8955546fbf", 00:09:51.880 "is_configured": true, 00:09:51.880 "data_offset": 0, 00:09:51.880 "data_size": 65536 00:09:51.880 } 00:09:51.880 ] 00:09:51.880 } 00:09:51.880 } 00:09:51.880 }' 00:09:51.880 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.880 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:51.880 BaseBdev2' 00:09:51.880 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:51.880 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:51.880 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:52.139 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:52.139 "name": "BaseBdev1", 00:09:52.139 "aliases": [ 00:09:52.139 "8d82f438-e8b1-4354-81ba-a2d6dec7a8db" 00:09:52.139 ], 00:09:52.140 "product_name": "Malloc disk", 
00:09:52.140 "block_size": 512, 00:09:52.140 "num_blocks": 65536, 00:09:52.140 "uuid": "8d82f438-e8b1-4354-81ba-a2d6dec7a8db", 00:09:52.140 "assigned_rate_limits": { 00:09:52.140 "rw_ios_per_sec": 0, 00:09:52.140 "rw_mbytes_per_sec": 0, 00:09:52.140 "r_mbytes_per_sec": 0, 00:09:52.140 "w_mbytes_per_sec": 0 00:09:52.140 }, 00:09:52.140 "claimed": true, 00:09:52.140 "claim_type": "exclusive_write", 00:09:52.140 "zoned": false, 00:09:52.140 "supported_io_types": { 00:09:52.140 "read": true, 00:09:52.140 "write": true, 00:09:52.140 "unmap": true, 00:09:52.140 "flush": true, 00:09:52.140 "reset": true, 00:09:52.140 "nvme_admin": false, 00:09:52.140 "nvme_io": false, 00:09:52.140 "nvme_io_md": false, 00:09:52.140 "write_zeroes": true, 00:09:52.140 "zcopy": true, 00:09:52.140 "get_zone_info": false, 00:09:52.140 "zone_management": false, 00:09:52.140 "zone_append": false, 00:09:52.140 "compare": false, 00:09:52.140 "compare_and_write": false, 00:09:52.140 "abort": true, 00:09:52.140 "seek_hole": false, 00:09:52.140 "seek_data": false, 00:09:52.140 "copy": true, 00:09:52.140 "nvme_iov_md": false 00:09:52.140 }, 00:09:52.140 "memory_domains": [ 00:09:52.140 { 00:09:52.140 "dma_device_id": "system", 00:09:52.140 "dma_device_type": 1 00:09:52.140 }, 00:09:52.140 { 00:09:52.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.140 "dma_device_type": 2 00:09:52.140 } 00:09:52.140 ], 00:09:52.140 "driver_specific": {} 00:09:52.140 }' 00:09:52.140 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.140 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.140 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:52.140 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.140 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.399 18:13:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:52.399 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.399 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.399 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:52.399 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.399 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.399 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:52.399 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:52.399 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:52.399 18:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:52.658 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:52.658 "name": "BaseBdev2", 00:09:52.658 "aliases": [ 00:09:52.658 "fe5b5ebb-297a-42da-98be-de8955546fbf" 00:09:52.658 ], 00:09:52.658 "product_name": "Malloc disk", 00:09:52.658 "block_size": 512, 00:09:52.658 "num_blocks": 65536, 00:09:52.658 "uuid": "fe5b5ebb-297a-42da-98be-de8955546fbf", 00:09:52.658 "assigned_rate_limits": { 00:09:52.658 "rw_ios_per_sec": 0, 00:09:52.658 "rw_mbytes_per_sec": 0, 00:09:52.658 "r_mbytes_per_sec": 0, 00:09:52.658 "w_mbytes_per_sec": 0 00:09:52.658 }, 00:09:52.658 "claimed": true, 00:09:52.658 "claim_type": "exclusive_write", 00:09:52.658 "zoned": false, 00:09:52.658 "supported_io_types": { 00:09:52.658 "read": true, 00:09:52.658 "write": true, 00:09:52.658 "unmap": true, 00:09:52.658 "flush": true, 00:09:52.658 "reset": 
true, 00:09:52.658 "nvme_admin": false, 00:09:52.658 "nvme_io": false, 00:09:52.658 "nvme_io_md": false, 00:09:52.658 "write_zeroes": true, 00:09:52.658 "zcopy": true, 00:09:52.658 "get_zone_info": false, 00:09:52.658 "zone_management": false, 00:09:52.658 "zone_append": false, 00:09:52.658 "compare": false, 00:09:52.658 "compare_and_write": false, 00:09:52.658 "abort": true, 00:09:52.658 "seek_hole": false, 00:09:52.658 "seek_data": false, 00:09:52.658 "copy": true, 00:09:52.658 "nvme_iov_md": false 00:09:52.658 }, 00:09:52.658 "memory_domains": [ 00:09:52.658 { 00:09:52.658 "dma_device_id": "system", 00:09:52.658 "dma_device_type": 1 00:09:52.658 }, 00:09:52.658 { 00:09:52.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.658 "dma_device_type": 2 00:09:52.658 } 00:09:52.658 ], 00:09:52.658 "driver_specific": {} 00:09:52.658 }' 00:09:52.659 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.659 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.659 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:52.659 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.659 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.659 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:52.659 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.918 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.918 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:52.918 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.918 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.918 18:13:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:52.918 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:52.918 [2024-07-24 18:13:01.505327] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.918 [2024-07-24 18:13:01.505348] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.918 [2024-07-24 18:13:01.505376] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:53.178 "name": "Existed_Raid", 00:09:53.178 "uuid": "ae16ff68-2ace-4fd8-984e-16c767fd552c", 00:09:53.178 "strip_size_kb": 64, 00:09:53.178 "state": "offline", 00:09:53.178 "raid_level": "raid0", 00:09:53.178 "superblock": false, 00:09:53.178 "num_base_bdevs": 2, 00:09:53.178 "num_base_bdevs_discovered": 1, 00:09:53.178 "num_base_bdevs_operational": 1, 00:09:53.178 "base_bdevs_list": [ 00:09:53.178 { 00:09:53.178 "name": null, 00:09:53.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.178 "is_configured": false, 00:09:53.178 "data_offset": 0, 00:09:53.178 "data_size": 65536 00:09:53.178 }, 00:09:53.178 { 00:09:53.178 "name": "BaseBdev2", 00:09:53.178 "uuid": "fe5b5ebb-297a-42da-98be-de8955546fbf", 00:09:53.178 "is_configured": true, 00:09:53.178 "data_offset": 0, 00:09:53.178 "data_size": 65536 00:09:53.178 } 00:09:53.178 ] 00:09:53.178 }' 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:53.178 18:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.746 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:53.746 18:13:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:53.746 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.746 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:54.006 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:54.006 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.006 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:54.006 [2024-07-24 18:13:02.512793] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.006 [2024-07-24 18:13:02.512829] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x212f880 name Existed_Raid, state offline 00:09:54.006 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:54.006 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:54.006 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.006 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2149858 
00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2149858 ']' 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2149858 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2149858 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2149858' 00:09:54.266 killing process with pid 2149858 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2149858 00:09:54.266 [2024-07-24 18:13:02.766586] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.266 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2149858 00:09:54.266 [2024-07-24 18:13:02.767369] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.526 18:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:54.526 00:09:54.526 real 0m8.082s 00:09:54.526 user 0m14.159s 00:09:54.526 sys 0m1.624s 00:09:54.526 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.526 18:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.526 ************************************ 00:09:54.526 END TEST raid_state_function_test 00:09:54.526 ************************************ 00:09:54.526 18:13:02 bdev_raid -- 
bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:54.526 18:13:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:54.526 18:13:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.526 18:13:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.526 ************************************ 00:09:54.526 START TEST raid_state_function_test_sb 00:09:54.526 ************************************ 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:54.526 18:13:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2151678 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2151678' 00:09:54.526 Process raid pid: 2151678 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2151678 /var/tmp/spdk-raid.sock 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2151678 ']' 00:09:54.526 18:13:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:54.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.526 18:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.526 [2024-07-24 18:13:03.079840] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:09:54.526 [2024-07-24 18:13:03.079882] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:01.0 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:01.1 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:01.2 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:01.3 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:01.4 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:01.5 cannot be used 
00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:01.6 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:01.7 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:02.0 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:02.1 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:02.2 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:02.3 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:02.4 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:02.5 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:02.6 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b3:02.7 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:01.0 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:01.1 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:01.2 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:01.3 cannot be used 00:09:54.786 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:01.4 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:01.5 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:01.6 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:01.7 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:02.0 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:02.1 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:02.2 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:02.3 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:02.4 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:02.5 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:02.6 cannot be used 00:09:54.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:54.786 EAL: Requested device 0000:b5:02.7 cannot be used 00:09:54.786 [2024-07-24 18:13:03.170598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.786 [2024-07-24 18:13:03.242848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.786 [2024-07-24 18:13:03.295762] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:09:54.786 [2024-07-24 18:13:03.295803] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.352 18:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.352 18:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:55.352 18:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:55.611 [2024-07-24 18:13:04.042834] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.611 [2024-07-24 18:13:04.042867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.611 [2024-07-24 18:13:04.042875] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.611 [2024-07-24 18:13:04.042883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.611 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.870 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:55.870 "name": "Existed_Raid", 00:09:55.870 "uuid": "24217612-0678-43a3-bee5-d63a004718c7", 00:09:55.870 "strip_size_kb": 64, 00:09:55.870 "state": "configuring", 00:09:55.870 "raid_level": "raid0", 00:09:55.870 "superblock": true, 00:09:55.870 "num_base_bdevs": 2, 00:09:55.870 "num_base_bdevs_discovered": 0, 00:09:55.870 "num_base_bdevs_operational": 2, 00:09:55.870 "base_bdevs_list": [ 00:09:55.870 { 00:09:55.870 "name": "BaseBdev1", 00:09:55.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.870 "is_configured": false, 00:09:55.870 "data_offset": 0, 00:09:55.870 "data_size": 0 00:09:55.870 }, 00:09:55.870 { 00:09:55.870 "name": "BaseBdev2", 00:09:55.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.870 "is_configured": false, 00:09:55.870 "data_offset": 0, 00:09:55.870 "data_size": 0 00:09:55.870 } 00:09:55.870 ] 00:09:55.870 }' 00:09:55.870 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:55.870 18:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.438 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:56.438 [2024-07-24 18:13:04.884892] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.438 [2024-07-24 18:13:04.884911] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19631a0 name Existed_Raid, state configuring 00:09:56.438 18:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:56.697 [2024-07-24 18:13:05.053347] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.697 [2024-07-24 18:13:05.053367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.697 [2024-07-24 18:13:05.053373] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.697 [2024-07-24 18:13:05.053380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.697 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.697 [2024-07-24 18:13:05.246201] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.697 BaseBdev1 00:09:56.697 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:56.697 18:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:56.697 18:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.697 18:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
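[editor's note] The `waitforbdev BaseBdev1` trace above shows the common helper defaulting `bdev_timeout` to 2000 ms and then issuing `bdev_get_bdevs -b BaseBdev1 -t 2000`. A minimal Python sketch of that wait-for-bdev pattern, under the assumption that `get_bdevs` is a hypothetical stand-in for the `rpc.py bdev_get_bdevs -b <name>` call (returning a possibly empty list of bdev dicts); this is a sketch of the polling idea, not the script's exact implementation:

```python
import time

def wait_for_bdev(get_bdevs, name, timeout_ms=2000, poll_ms=100):
    """Poll get_bdevs(name) until a bdev appears or timeout_ms elapses.

    get_bdevs: hypothetical stand-in for `rpc.py bdev_get_bdevs -b <name>`;
    returns a list of bdev dicts, empty while the bdev does not exist yet.
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        bdevs = get_bdevs(name)
        if bdevs:
            # Found it: return the first (and only) matching bdev record.
            return bdevs[0]
        time.sleep(poll_ms / 1000.0)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_ms} ms")
```

In the real script the `-t 2000` flag pushes the wait into the RPC server itself; the sketch only illustrates the client-side timeout contract.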
00:09:56.697 18:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.697 18:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.697 18:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:56.957 18:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:57.217 [ 00:09:57.217 { 00:09:57.217 "name": "BaseBdev1", 00:09:57.217 "aliases": [ 00:09:57.217 "799b1bf6-20cc-47a8-a5bd-dfefaa489bb5" 00:09:57.217 ], 00:09:57.217 "product_name": "Malloc disk", 00:09:57.217 "block_size": 512, 00:09:57.217 "num_blocks": 65536, 00:09:57.217 "uuid": "799b1bf6-20cc-47a8-a5bd-dfefaa489bb5", 00:09:57.217 "assigned_rate_limits": { 00:09:57.217 "rw_ios_per_sec": 0, 00:09:57.217 "rw_mbytes_per_sec": 0, 00:09:57.217 "r_mbytes_per_sec": 0, 00:09:57.217 "w_mbytes_per_sec": 0 00:09:57.217 }, 00:09:57.217 "claimed": true, 00:09:57.217 "claim_type": "exclusive_write", 00:09:57.217 "zoned": false, 00:09:57.217 "supported_io_types": { 00:09:57.217 "read": true, 00:09:57.217 "write": true, 00:09:57.217 "unmap": true, 00:09:57.217 "flush": true, 00:09:57.217 "reset": true, 00:09:57.217 "nvme_admin": false, 00:09:57.217 "nvme_io": false, 00:09:57.217 "nvme_io_md": false, 00:09:57.217 "write_zeroes": true, 00:09:57.217 "zcopy": true, 00:09:57.217 "get_zone_info": false, 00:09:57.217 "zone_management": false, 00:09:57.217 "zone_append": false, 00:09:57.217 "compare": false, 00:09:57.217 "compare_and_write": false, 00:09:57.217 "abort": true, 00:09:57.217 "seek_hole": false, 00:09:57.217 "seek_data": false, 00:09:57.217 "copy": true, 00:09:57.217 "nvme_iov_md": false 00:09:57.217 }, 00:09:57.217 
"memory_domains": [ 00:09:57.217 { 00:09:57.217 "dma_device_id": "system", 00:09:57.217 "dma_device_type": 1 00:09:57.217 }, 00:09:57.217 { 00:09:57.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.217 "dma_device_type": 2 00:09:57.217 } 00:09:57.217 ], 00:09:57.217 "driver_specific": {} 00:09:57.217 } 00:09:57.217 ] 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.217 18:13:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:57.217 "name": "Existed_Raid", 00:09:57.217 "uuid": "ff042406-6b57-4c77-9314-97fe350a0404", 00:09:57.217 "strip_size_kb": 64, 00:09:57.217 "state": "configuring", 00:09:57.217 "raid_level": "raid0", 00:09:57.217 "superblock": true, 00:09:57.217 "num_base_bdevs": 2, 00:09:57.217 "num_base_bdevs_discovered": 1, 00:09:57.217 "num_base_bdevs_operational": 2, 00:09:57.217 "base_bdevs_list": [ 00:09:57.217 { 00:09:57.217 "name": "BaseBdev1", 00:09:57.217 "uuid": "799b1bf6-20cc-47a8-a5bd-dfefaa489bb5", 00:09:57.217 "is_configured": true, 00:09:57.217 "data_offset": 2048, 00:09:57.217 "data_size": 63488 00:09:57.217 }, 00:09:57.217 { 00:09:57.217 "name": "BaseBdev2", 00:09:57.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.217 "is_configured": false, 00:09:57.217 "data_offset": 0, 00:09:57.217 "data_size": 0 00:09:57.217 } 00:09:57.217 ] 00:09:57.217 }' 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:57.217 18:13:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.786 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:58.046 [2024-07-24 18:13:06.429264] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.046 [2024-07-24 18:13:06.429295] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1962a90 name Existed_Raid, state configuring 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:58.046 [2024-07-24 18:13:06.597737] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:58.046 [2024-07-24 18:13:06.598768] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.046 [2024-07-24 18:13:06.598794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.046 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:09:58.305 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:58.305 "name": "Existed_Raid", 00:09:58.305 "uuid": "3bf83b1d-35b0-4809-b73f-4cf3f6dd2c78", 00:09:58.305 "strip_size_kb": 64, 00:09:58.305 "state": "configuring", 00:09:58.305 "raid_level": "raid0", 00:09:58.305 "superblock": true, 00:09:58.305 "num_base_bdevs": 2, 00:09:58.305 "num_base_bdevs_discovered": 1, 00:09:58.305 "num_base_bdevs_operational": 2, 00:09:58.305 "base_bdevs_list": [ 00:09:58.305 { 00:09:58.305 "name": "BaseBdev1", 00:09:58.305 "uuid": "799b1bf6-20cc-47a8-a5bd-dfefaa489bb5", 00:09:58.305 "is_configured": true, 00:09:58.305 "data_offset": 2048, 00:09:58.305 "data_size": 63488 00:09:58.305 }, 00:09:58.305 { 00:09:58.305 "name": "BaseBdev2", 00:09:58.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.305 "is_configured": false, 00:09:58.305 "data_offset": 0, 00:09:58.305 "data_size": 0 00:09:58.305 } 00:09:58.305 ] 00:09:58.305 }' 00:09:58.305 18:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:58.305 18:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.872 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.872 [2024-07-24 18:13:07.442658] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.872 [2024-07-24 18:13:07.442785] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1963880 00:09:58.872 [2024-07-24 18:13:07.442795] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:58.872 [2024-07-24 18:13:07.442917] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b169a0 00:09:58.872 [2024-07-24 18:13:07.442998] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1963880 00:09:58.872 [2024-07-24 18:13:07.443004] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1963880 00:09:58.872 [2024-07-24 18:13:07.443070] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.872 BaseBdev2 00:09:58.872 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:58.872 18:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:58.872 18:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.873 18:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:58.873 18:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.873 18:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.873 18:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:59.132 18:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.391 [ 00:09:59.391 { 00:09:59.391 "name": "BaseBdev2", 00:09:59.391 "aliases": [ 00:09:59.391 "e3474905-b134-4c88-bdba-2a3ba9839ab3" 00:09:59.391 ], 00:09:59.391 "product_name": "Malloc disk", 00:09:59.391 "block_size": 512, 00:09:59.391 "num_blocks": 65536, 00:09:59.391 "uuid": "e3474905-b134-4c88-bdba-2a3ba9839ab3", 00:09:59.391 "assigned_rate_limits": { 00:09:59.391 "rw_ios_per_sec": 0, 00:09:59.391 "rw_mbytes_per_sec": 0, 00:09:59.391 "r_mbytes_per_sec": 0, 00:09:59.391 
"w_mbytes_per_sec": 0 00:09:59.391 }, 00:09:59.391 "claimed": true, 00:09:59.391 "claim_type": "exclusive_write", 00:09:59.391 "zoned": false, 00:09:59.391 "supported_io_types": { 00:09:59.391 "read": true, 00:09:59.391 "write": true, 00:09:59.391 "unmap": true, 00:09:59.391 "flush": true, 00:09:59.391 "reset": true, 00:09:59.391 "nvme_admin": false, 00:09:59.391 "nvme_io": false, 00:09:59.391 "nvme_io_md": false, 00:09:59.391 "write_zeroes": true, 00:09:59.391 "zcopy": true, 00:09:59.391 "get_zone_info": false, 00:09:59.391 "zone_management": false, 00:09:59.391 "zone_append": false, 00:09:59.391 "compare": false, 00:09:59.391 "compare_and_write": false, 00:09:59.391 "abort": true, 00:09:59.391 "seek_hole": false, 00:09:59.391 "seek_data": false, 00:09:59.391 "copy": true, 00:09:59.391 "nvme_iov_md": false 00:09:59.391 }, 00:09:59.391 "memory_domains": [ 00:09:59.391 { 00:09:59.391 "dma_device_id": "system", 00:09:59.391 "dma_device_type": 1 00:09:59.391 }, 00:09:59.391 { 00:09:59.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.391 "dma_device_type": 2 00:09:59.391 } 00:09:59.391 ], 00:09:59.391 "driver_specific": {} 00:09:59.391 } 00:09:59.391 ] 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.391 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:59.391 "name": "Existed_Raid", 00:09:59.391 "uuid": "3bf83b1d-35b0-4809-b73f-4cf3f6dd2c78", 00:09:59.391 "strip_size_kb": 64, 00:09:59.391 "state": "online", 00:09:59.391 "raid_level": "raid0", 00:09:59.391 "superblock": true, 00:09:59.391 "num_base_bdevs": 2, 00:09:59.391 "num_base_bdevs_discovered": 2, 00:09:59.391 "num_base_bdevs_operational": 2, 00:09:59.391 "base_bdevs_list": [ 00:09:59.391 { 00:09:59.391 "name": "BaseBdev1", 00:09:59.391 "uuid": "799b1bf6-20cc-47a8-a5bd-dfefaa489bb5", 00:09:59.391 "is_configured": true, 00:09:59.391 "data_offset": 2048, 00:09:59.391 "data_size": 63488 00:09:59.391 }, 00:09:59.391 { 00:09:59.391 "name": "BaseBdev2", 00:09:59.391 "uuid": "e3474905-b134-4c88-bdba-2a3ba9839ab3", 00:09:59.391 "is_configured": true, 00:09:59.391 "data_offset": 2048, 00:09:59.391 "data_size": 63488 00:09:59.391 } 00:09:59.391 ] 
00:09:59.391 }' 00:09:59.392 18:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:59.392 18:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.010 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.010 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:00.010 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:00.010 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:00.010 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:00.011 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:00.011 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:00.011 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:00.289 [2024-07-24 18:13:08.605837] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.289 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:00.289 "name": "Existed_Raid", 00:10:00.289 "aliases": [ 00:10:00.289 "3bf83b1d-35b0-4809-b73f-4cf3f6dd2c78" 00:10:00.289 ], 00:10:00.289 "product_name": "Raid Volume", 00:10:00.289 "block_size": 512, 00:10:00.289 "num_blocks": 126976, 00:10:00.289 "uuid": "3bf83b1d-35b0-4809-b73f-4cf3f6dd2c78", 00:10:00.289 "assigned_rate_limits": { 00:10:00.289 "rw_ios_per_sec": 0, 00:10:00.289 "rw_mbytes_per_sec": 0, 00:10:00.289 "r_mbytes_per_sec": 0, 00:10:00.289 "w_mbytes_per_sec": 0 00:10:00.289 }, 00:10:00.289 "claimed": false, 00:10:00.289 
"zoned": false, 00:10:00.289 "supported_io_types": { 00:10:00.289 "read": true, 00:10:00.289 "write": true, 00:10:00.289 "unmap": true, 00:10:00.289 "flush": true, 00:10:00.289 "reset": true, 00:10:00.289 "nvme_admin": false, 00:10:00.289 "nvme_io": false, 00:10:00.289 "nvme_io_md": false, 00:10:00.289 "write_zeroes": true, 00:10:00.289 "zcopy": false, 00:10:00.289 "get_zone_info": false, 00:10:00.289 "zone_management": false, 00:10:00.290 "zone_append": false, 00:10:00.290 "compare": false, 00:10:00.290 "compare_and_write": false, 00:10:00.290 "abort": false, 00:10:00.290 "seek_hole": false, 00:10:00.290 "seek_data": false, 00:10:00.290 "copy": false, 00:10:00.290 "nvme_iov_md": false 00:10:00.290 }, 00:10:00.290 "memory_domains": [ 00:10:00.290 { 00:10:00.290 "dma_device_id": "system", 00:10:00.290 "dma_device_type": 1 00:10:00.290 }, 00:10:00.290 { 00:10:00.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.290 "dma_device_type": 2 00:10:00.290 }, 00:10:00.290 { 00:10:00.290 "dma_device_id": "system", 00:10:00.290 "dma_device_type": 1 00:10:00.290 }, 00:10:00.290 { 00:10:00.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.290 "dma_device_type": 2 00:10:00.290 } 00:10:00.290 ], 00:10:00.290 "driver_specific": { 00:10:00.290 "raid": { 00:10:00.290 "uuid": "3bf83b1d-35b0-4809-b73f-4cf3f6dd2c78", 00:10:00.290 "strip_size_kb": 64, 00:10:00.290 "state": "online", 00:10:00.290 "raid_level": "raid0", 00:10:00.290 "superblock": true, 00:10:00.290 "num_base_bdevs": 2, 00:10:00.290 "num_base_bdevs_discovered": 2, 00:10:00.290 "num_base_bdevs_operational": 2, 00:10:00.290 "base_bdevs_list": [ 00:10:00.290 { 00:10:00.290 "name": "BaseBdev1", 00:10:00.290 "uuid": "799b1bf6-20cc-47a8-a5bd-dfefaa489bb5", 00:10:00.290 "is_configured": true, 00:10:00.290 "data_offset": 2048, 00:10:00.290 "data_size": 63488 00:10:00.290 }, 00:10:00.290 { 00:10:00.290 "name": "BaseBdev2", 00:10:00.290 "uuid": "e3474905-b134-4c88-bdba-2a3ba9839ab3", 00:10:00.290 "is_configured": true, 
00:10:00.290 "data_offset": 2048, 00:10:00.290 "data_size": 63488 00:10:00.290 } 00:10:00.290 ] 00:10:00.290 } 00:10:00.290 } 00:10:00.290 }' 00:10:00.290 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:00.290 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:00.290 BaseBdev2' 00:10:00.290 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:00.290 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:00.290 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:00.290 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:00.290 "name": "BaseBdev1", 00:10:00.290 "aliases": [ 00:10:00.290 "799b1bf6-20cc-47a8-a5bd-dfefaa489bb5" 00:10:00.290 ], 00:10:00.290 "product_name": "Malloc disk", 00:10:00.290 "block_size": 512, 00:10:00.290 "num_blocks": 65536, 00:10:00.290 "uuid": "799b1bf6-20cc-47a8-a5bd-dfefaa489bb5", 00:10:00.290 "assigned_rate_limits": { 00:10:00.290 "rw_ios_per_sec": 0, 00:10:00.290 "rw_mbytes_per_sec": 0, 00:10:00.290 "r_mbytes_per_sec": 0, 00:10:00.290 "w_mbytes_per_sec": 0 00:10:00.290 }, 00:10:00.290 "claimed": true, 00:10:00.290 "claim_type": "exclusive_write", 00:10:00.290 "zoned": false, 00:10:00.290 "supported_io_types": { 00:10:00.290 "read": true, 00:10:00.290 "write": true, 00:10:00.290 "unmap": true, 00:10:00.290 "flush": true, 00:10:00.290 "reset": true, 00:10:00.290 "nvme_admin": false, 00:10:00.290 "nvme_io": false, 00:10:00.290 "nvme_io_md": false, 00:10:00.290 "write_zeroes": true, 00:10:00.290 "zcopy": true, 00:10:00.290 "get_zone_info": false, 00:10:00.290 
"zone_management": false, 00:10:00.290 "zone_append": false, 00:10:00.290 "compare": false, 00:10:00.290 "compare_and_write": false, 00:10:00.290 "abort": true, 00:10:00.290 "seek_hole": false, 00:10:00.290 "seek_data": false, 00:10:00.290 "copy": true, 00:10:00.290 "nvme_iov_md": false 00:10:00.290 }, 00:10:00.290 "memory_domains": [ 00:10:00.290 { 00:10:00.290 "dma_device_id": "system", 00:10:00.290 "dma_device_type": 1 00:10:00.290 }, 00:10:00.290 { 00:10:00.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.290 "dma_device_type": 2 00:10:00.290 } 00:10:00.290 ], 00:10:00.290 "driver_specific": {} 00:10:00.290 }' 00:10:00.290 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:00.550 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:00.550 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:00.550 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:00.550 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:00.550 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:00.550 18:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:00.550 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:00.550 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:00.550 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:00.550 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:00.809 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:00.809 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:10:00.809 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:00.809 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:00.809 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:00.809 "name": "BaseBdev2", 00:10:00.809 "aliases": [ 00:10:00.809 "e3474905-b134-4c88-bdba-2a3ba9839ab3" 00:10:00.810 ], 00:10:00.810 "product_name": "Malloc disk", 00:10:00.810 "block_size": 512, 00:10:00.810 "num_blocks": 65536, 00:10:00.810 "uuid": "e3474905-b134-4c88-bdba-2a3ba9839ab3", 00:10:00.810 "assigned_rate_limits": { 00:10:00.810 "rw_ios_per_sec": 0, 00:10:00.810 "rw_mbytes_per_sec": 0, 00:10:00.810 "r_mbytes_per_sec": 0, 00:10:00.810 "w_mbytes_per_sec": 0 00:10:00.810 }, 00:10:00.810 "claimed": true, 00:10:00.810 "claim_type": "exclusive_write", 00:10:00.810 "zoned": false, 00:10:00.810 "supported_io_types": { 00:10:00.810 "read": true, 00:10:00.810 "write": true, 00:10:00.810 "unmap": true, 00:10:00.810 "flush": true, 00:10:00.810 "reset": true, 00:10:00.810 "nvme_admin": false, 00:10:00.810 "nvme_io": false, 00:10:00.810 "nvme_io_md": false, 00:10:00.810 "write_zeroes": true, 00:10:00.810 "zcopy": true, 00:10:00.810 "get_zone_info": false, 00:10:00.810 "zone_management": false, 00:10:00.810 "zone_append": false, 00:10:00.810 "compare": false, 00:10:00.810 "compare_and_write": false, 00:10:00.810 "abort": true, 00:10:00.810 "seek_hole": false, 00:10:00.810 "seek_data": false, 00:10:00.810 "copy": true, 00:10:00.810 "nvme_iov_md": false 00:10:00.810 }, 00:10:00.810 "memory_domains": [ 00:10:00.810 { 00:10:00.810 "dma_device_id": "system", 00:10:00.810 "dma_device_type": 1 00:10:00.810 }, 00:10:00.810 { 00:10:00.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.810 "dma_device_type": 2 00:10:00.810 } 00:10:00.810 ], 
00:10:00.810 "driver_specific": {} 00:10:00.810 }' 00:10:00.810 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:00.810 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:01.068 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:01.328 [2024-07-24 18:13:09.808814] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.328 [2024-07-24 18:13:09.808837] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.328 [2024-07-24 18:13:09.808867] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 
00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.329 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.588 18:13:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:01.588 "name": "Existed_Raid", 00:10:01.588 "uuid": "3bf83b1d-35b0-4809-b73f-4cf3f6dd2c78", 00:10:01.588 "strip_size_kb": 64, 00:10:01.588 "state": "offline", 00:10:01.588 "raid_level": "raid0", 00:10:01.588 "superblock": true, 00:10:01.588 "num_base_bdevs": 2, 00:10:01.588 "num_base_bdevs_discovered": 1, 00:10:01.588 "num_base_bdevs_operational": 1, 00:10:01.588 "base_bdevs_list": [ 00:10:01.588 { 00:10:01.588 "name": null, 00:10:01.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.588 "is_configured": false, 00:10:01.588 "data_offset": 2048, 00:10:01.588 "data_size": 63488 00:10:01.588 }, 00:10:01.588 { 00:10:01.588 "name": "BaseBdev2", 00:10:01.588 "uuid": "e3474905-b134-4c88-bdba-2a3ba9839ab3", 00:10:01.588 "is_configured": true, 00:10:01.588 "data_offset": 2048, 00:10:01.588 "data_size": 63488 00:10:01.588 } 00:10:01.588 ] 00:10:01.588 }' 00:10:01.588 18:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:01.588 18:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.157 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:02.157 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:02.157 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:02.157 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:02.157 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:02.157 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.157 18:13:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:02.417 [2024-07-24 18:13:10.800251] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.417 [2024-07-24 18:13:10.800290] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1963880 name Existed_Raid, state offline 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2151678 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2151678 ']' 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2151678 00:10:02.417 18:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:02.417 18:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.417 18:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2151678 
00:10:02.676 18:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.676 18:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.676 18:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2151678' 00:10:02.676 killing process with pid 2151678 00:10:02.676 18:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2151678 00:10:02.676 [2024-07-24 18:13:11.054863] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.676 18:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2151678 00:10:02.676 [2024-07-24 18:13:11.055671] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.676 18:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:02.676 00:10:02.676 real 0m8.207s 00:10:02.676 user 0m14.421s 00:10:02.676 sys 0m1.643s 00:10:02.676 18:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.676 18:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.676 ************************************ 00:10:02.676 END TEST raid_state_function_test_sb 00:10:02.676 ************************************ 00:10:02.676 18:13:11 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:02.676 18:13:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:02.676 18:13:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.676 18:13:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.936 ************************************ 00:10:02.936 START TEST raid_superblock_test 00:10:02.936 ************************************ 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2153249 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2153249 /var/tmp/spdk-raid.sock 00:10:02.937 
18:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2153249 ']' 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:02.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.937 18:13:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.937 [2024-07-24 18:13:11.364501] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:10:02.937 [2024-07-24 18:13:11.364549] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153249 ] 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:01.0 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:01.1 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:01.2 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:01.3 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:01.4 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:01.5 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:01.6 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:01.7 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:02.0 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:02.1 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:02.2 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:02.3 cannot be used 
00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:02.4 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:02.5 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:02.6 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b3:02.7 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:01.0 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:01.1 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:01.2 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:01.3 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:01.4 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:01.5 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:01.6 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:01.7 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:02.0 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:02.1 cannot be used 00:10:02.937 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:02.2 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:02.3 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:02.4 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:02.5 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:02.6 cannot be used 00:10:02.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.937 EAL: Requested device 0000:b5:02.7 cannot be used 00:10:02.937 [2024-07-24 18:13:11.456949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.937 [2024-07-24 18:13:11.531464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.197 [2024-07-24 18:13:11.589324] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.197 [2024-07-24 18:13:11.589348] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:03.765 malloc1 00:10:03.765 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:04.024 [2024-07-24 18:13:12.489439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:04.024 [2024-07-24 18:13:12.489474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.024 [2024-07-24 18:13:12.489488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x260acb0 00:10:04.024 [2024-07-24 18:13:12.489495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.024 [2024-07-24 18:13:12.490578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.024 [2024-07-24 18:13:12.490602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:04.024 pt1 00:10:04.024 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:04.024 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:04.024 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:04.024 18:13:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:04.024 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:04.024 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.024 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.024 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.024 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:04.283 malloc2 00:10:04.283 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.283 [2024-07-24 18:13:12.814036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.283 [2024-07-24 18:13:12.814066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.283 [2024-07-24 18:13:12.814077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x260c0b0 00:10:04.283 [2024-07-24 18:13:12.814085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.283 [2024-07-24 18:13:12.815128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.283 [2024-07-24 18:13:12.815150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.283 pt2 00:10:04.283 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:04.283 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:04.283 18:13:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:10:04.542 [2024-07-24 18:13:12.974466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:04.542 [2024-07-24 18:13:12.975343] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.542 [2024-07-24 18:13:12.975445] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x27ae9b0 00:10:04.542 [2024-07-24 18:13:12.975454] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:04.542 [2024-07-24 18:13:12.975586] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27a4350 00:10:04.542 [2024-07-24 18:13:12.975703] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x27ae9b0 00:10:04.542 [2024-07-24 18:13:12.975710] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x27ae9b0 00:10:04.542 [2024-07-24 18:13:12.975773] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.542 18:13:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.801 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:04.802 "name": "raid_bdev1", 00:10:04.802 "uuid": "f361beb3-5ea3-46ca-a9dc-632ae0d8f4be", 00:10:04.802 "strip_size_kb": 64, 00:10:04.802 "state": "online", 00:10:04.802 "raid_level": "raid0", 00:10:04.802 "superblock": true, 00:10:04.802 "num_base_bdevs": 2, 00:10:04.802 "num_base_bdevs_discovered": 2, 00:10:04.802 "num_base_bdevs_operational": 2, 00:10:04.802 "base_bdevs_list": [ 00:10:04.802 { 00:10:04.802 "name": "pt1", 00:10:04.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.802 "is_configured": true, 00:10:04.802 "data_offset": 2048, 00:10:04.802 "data_size": 63488 00:10:04.802 }, 00:10:04.802 { 00:10:04.802 "name": "pt2", 00:10:04.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.802 "is_configured": true, 00:10:04.802 "data_offset": 2048, 00:10:04.802 "data_size": 63488 00:10:04.802 } 00:10:04.802 ] 00:10:04.802 }' 00:10:04.802 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:04.802 18:13:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.367 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:05.367 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- 
# local raid_bdev_name=raid_bdev1 00:10:05.367 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:05.367 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:05.367 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:05.367 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:05.367 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:05.367 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:05.367 [2024-07-24 18:13:13.812768] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.367 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:05.367 "name": "raid_bdev1", 00:10:05.367 "aliases": [ 00:10:05.367 "f361beb3-5ea3-46ca-a9dc-632ae0d8f4be" 00:10:05.367 ], 00:10:05.367 "product_name": "Raid Volume", 00:10:05.367 "block_size": 512, 00:10:05.367 "num_blocks": 126976, 00:10:05.367 "uuid": "f361beb3-5ea3-46ca-a9dc-632ae0d8f4be", 00:10:05.367 "assigned_rate_limits": { 00:10:05.367 "rw_ios_per_sec": 0, 00:10:05.367 "rw_mbytes_per_sec": 0, 00:10:05.367 "r_mbytes_per_sec": 0, 00:10:05.367 "w_mbytes_per_sec": 0 00:10:05.367 }, 00:10:05.367 "claimed": false, 00:10:05.367 "zoned": false, 00:10:05.367 "supported_io_types": { 00:10:05.367 "read": true, 00:10:05.367 "write": true, 00:10:05.367 "unmap": true, 00:10:05.367 "flush": true, 00:10:05.367 "reset": true, 00:10:05.367 "nvme_admin": false, 00:10:05.367 "nvme_io": false, 00:10:05.367 "nvme_io_md": false, 00:10:05.367 "write_zeroes": true, 00:10:05.367 "zcopy": false, 00:10:05.367 "get_zone_info": false, 00:10:05.367 "zone_management": false, 00:10:05.367 "zone_append": false, 00:10:05.367 "compare": false, 
00:10:05.367 "compare_and_write": false, 00:10:05.367 "abort": false, 00:10:05.367 "seek_hole": false, 00:10:05.367 "seek_data": false, 00:10:05.367 "copy": false, 00:10:05.367 "nvme_iov_md": false 00:10:05.367 }, 00:10:05.367 "memory_domains": [ 00:10:05.367 { 00:10:05.367 "dma_device_id": "system", 00:10:05.367 "dma_device_type": 1 00:10:05.367 }, 00:10:05.367 { 00:10:05.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.367 "dma_device_type": 2 00:10:05.367 }, 00:10:05.367 { 00:10:05.367 "dma_device_id": "system", 00:10:05.367 "dma_device_type": 1 00:10:05.367 }, 00:10:05.367 { 00:10:05.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.367 "dma_device_type": 2 00:10:05.367 } 00:10:05.367 ], 00:10:05.367 "driver_specific": { 00:10:05.367 "raid": { 00:10:05.367 "uuid": "f361beb3-5ea3-46ca-a9dc-632ae0d8f4be", 00:10:05.368 "strip_size_kb": 64, 00:10:05.368 "state": "online", 00:10:05.368 "raid_level": "raid0", 00:10:05.368 "superblock": true, 00:10:05.368 "num_base_bdevs": 2, 00:10:05.368 "num_base_bdevs_discovered": 2, 00:10:05.368 "num_base_bdevs_operational": 2, 00:10:05.368 "base_bdevs_list": [ 00:10:05.368 { 00:10:05.368 "name": "pt1", 00:10:05.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.368 "is_configured": true, 00:10:05.368 "data_offset": 2048, 00:10:05.368 "data_size": 63488 00:10:05.368 }, 00:10:05.368 { 00:10:05.368 "name": "pt2", 00:10:05.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.368 "is_configured": true, 00:10:05.368 "data_offset": 2048, 00:10:05.368 "data_size": 63488 00:10:05.368 } 00:10:05.368 ] 00:10:05.368 } 00:10:05.368 } 00:10:05.368 }' 00:10:05.368 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.368 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:05.368 pt2' 00:10:05.368 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:10:05.368 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:05.368 18:13:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:05.627 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:05.627 "name": "pt1", 00:10:05.627 "aliases": [ 00:10:05.627 "00000000-0000-0000-0000-000000000001" 00:10:05.627 ], 00:10:05.627 "product_name": "passthru", 00:10:05.627 "block_size": 512, 00:10:05.627 "num_blocks": 65536, 00:10:05.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.627 "assigned_rate_limits": { 00:10:05.627 "rw_ios_per_sec": 0, 00:10:05.627 "rw_mbytes_per_sec": 0, 00:10:05.627 "r_mbytes_per_sec": 0, 00:10:05.627 "w_mbytes_per_sec": 0 00:10:05.627 }, 00:10:05.627 "claimed": true, 00:10:05.627 "claim_type": "exclusive_write", 00:10:05.627 "zoned": false, 00:10:05.627 "supported_io_types": { 00:10:05.627 "read": true, 00:10:05.627 "write": true, 00:10:05.627 "unmap": true, 00:10:05.627 "flush": true, 00:10:05.627 "reset": true, 00:10:05.627 "nvme_admin": false, 00:10:05.627 "nvme_io": false, 00:10:05.627 "nvme_io_md": false, 00:10:05.627 "write_zeroes": true, 00:10:05.627 "zcopy": true, 00:10:05.627 "get_zone_info": false, 00:10:05.627 "zone_management": false, 00:10:05.627 "zone_append": false, 00:10:05.627 "compare": false, 00:10:05.627 "compare_and_write": false, 00:10:05.627 "abort": true, 00:10:05.627 "seek_hole": false, 00:10:05.627 "seek_data": false, 00:10:05.627 "copy": true, 00:10:05.627 "nvme_iov_md": false 00:10:05.627 }, 00:10:05.627 "memory_domains": [ 00:10:05.627 { 00:10:05.627 "dma_device_id": "system", 00:10:05.627 "dma_device_type": 1 00:10:05.627 }, 00:10:05.627 { 00:10:05.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.627 "dma_device_type": 2 00:10:05.627 } 00:10:05.627 ], 00:10:05.627 
"driver_specific": { 00:10:05.627 "passthru": { 00:10:05.627 "name": "pt1", 00:10:05.627 "base_bdev_name": "malloc1" 00:10:05.627 } 00:10:05.627 } 00:10:05.627 }' 00:10:05.627 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:05.627 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:05.627 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:05.627 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:05.627 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:05.627 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:05.886 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:05.886 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:05.886 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:05.886 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:05.886 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:05.886 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:05.886 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:05.886 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:05.886 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:06.146 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:06.146 "name": "pt2", 00:10:06.146 "aliases": [ 00:10:06.146 "00000000-0000-0000-0000-000000000002" 00:10:06.146 ], 00:10:06.146 "product_name": 
"passthru", 00:10:06.146 "block_size": 512, 00:10:06.146 "num_blocks": 65536, 00:10:06.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.146 "assigned_rate_limits": { 00:10:06.146 "rw_ios_per_sec": 0, 00:10:06.146 "rw_mbytes_per_sec": 0, 00:10:06.146 "r_mbytes_per_sec": 0, 00:10:06.146 "w_mbytes_per_sec": 0 00:10:06.146 }, 00:10:06.146 "claimed": true, 00:10:06.146 "claim_type": "exclusive_write", 00:10:06.146 "zoned": false, 00:10:06.146 "supported_io_types": { 00:10:06.146 "read": true, 00:10:06.146 "write": true, 00:10:06.146 "unmap": true, 00:10:06.146 "flush": true, 00:10:06.146 "reset": true, 00:10:06.146 "nvme_admin": false, 00:10:06.146 "nvme_io": false, 00:10:06.146 "nvme_io_md": false, 00:10:06.146 "write_zeroes": true, 00:10:06.146 "zcopy": true, 00:10:06.146 "get_zone_info": false, 00:10:06.146 "zone_management": false, 00:10:06.146 "zone_append": false, 00:10:06.146 "compare": false, 00:10:06.146 "compare_and_write": false, 00:10:06.146 "abort": true, 00:10:06.146 "seek_hole": false, 00:10:06.146 "seek_data": false, 00:10:06.146 "copy": true, 00:10:06.146 "nvme_iov_md": false 00:10:06.146 }, 00:10:06.146 "memory_domains": [ 00:10:06.146 { 00:10:06.146 "dma_device_id": "system", 00:10:06.146 "dma_device_type": 1 00:10:06.146 }, 00:10:06.146 { 00:10:06.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.146 "dma_device_type": 2 00:10:06.146 } 00:10:06.146 ], 00:10:06.146 "driver_specific": { 00:10:06.146 "passthru": { 00:10:06.146 "name": "pt2", 00:10:06.146 "base_bdev_name": "malloc2" 00:10:06.146 } 00:10:06.146 } 00:10:06.146 }' 00:10:06.146 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:06.146 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:06.146 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:06.146 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:06.146 18:13:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:06.146 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:06.146 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:06.405 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:06.405 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:06.405 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:06.405 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:06.405 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:06.405 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:06.405 18:13:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:06.664 [2024-07-24 18:13:15.019871] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.664 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f361beb3-5ea3-46ca-a9dc-632ae0d8f4be 00:10:06.664 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f361beb3-5ea3-46ca-a9dc-632ae0d8f4be ']' 00:10:06.664 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:06.664 [2024-07-24 18:13:15.192155] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.664 [2024-07-24 18:13:15.192167] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.664 [2024-07-24 18:13:15.192204] bdev_raid.c: 486:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:06.664 [2024-07-24 18:13:15.192235] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.664 [2024-07-24 18:13:15.192242] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27ae9b0 name raid_bdev1, state offline 00:10:06.664 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.664 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:06.923 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:06.923 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:06.923 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.923 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:07.183 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:07.183 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:07.183 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:07.183 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:07.442 18:13:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:10:07.701 [2024-07-24 18:13:16.046341] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:07.701 [2024-07-24 18:13:16.047303] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:07.701 [2024-07-24 18:13:16.047345] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:07.701 [2024-07-24 18:13:16.047373] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:07.701 [2024-07-24 18:13:16.047385] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.701 [2024-07-24 18:13:16.047391] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x27ae730 name raid_bdev1, state configuring 00:10:07.701 request: 00:10:07.701 { 00:10:07.701 "name": "raid_bdev1", 00:10:07.701 "raid_level": "raid0", 00:10:07.701 "base_bdevs": [ 00:10:07.701 "malloc1", 00:10:07.701 "malloc2" 00:10:07.701 ], 00:10:07.701 "strip_size_kb": 64, 00:10:07.701 "superblock": false, 00:10:07.701 "method": "bdev_raid_create", 00:10:07.701 "req_id": 1 00:10:07.701 } 00:10:07.701 Got JSON-RPC error response 00:10:07.701 response: 00:10:07.702 { 00:10:07.702 "code": -17, 00:10:07.702 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:07.702 } 00:10:07.702 18:13:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:07.702 18:13:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:07.702 18:13:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:07.702 18:13:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:07.702 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.702 18:13:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:07.702 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:07.702 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:07.702 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:07.961 [2024-07-24 18:13:16.379171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:07.961 [2024-07-24 18:13:16.379201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.961 [2024-07-24 18:13:16.379215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x260aee0 00:10:07.961 [2024-07-24 18:13:16.379224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.961 [2024-07-24 18:13:16.380336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.961 [2024-07-24 18:13:16.380358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:07.961 [2024-07-24 18:13:16.380404] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:07.961 [2024-07-24 18:13:16.380421] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:07.961 pt1 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:07.961 18:13:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.961 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.220 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:08.220 "name": "raid_bdev1", 00:10:08.220 "uuid": "f361beb3-5ea3-46ca-a9dc-632ae0d8f4be", 00:10:08.220 "strip_size_kb": 64, 00:10:08.220 "state": "configuring", 00:10:08.220 "raid_level": "raid0", 00:10:08.220 "superblock": true, 00:10:08.220 "num_base_bdevs": 2, 00:10:08.220 "num_base_bdevs_discovered": 1, 00:10:08.220 "num_base_bdevs_operational": 2, 00:10:08.220 "base_bdevs_list": [ 00:10:08.220 { 00:10:08.220 "name": "pt1", 00:10:08.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.220 "is_configured": true, 00:10:08.220 "data_offset": 2048, 00:10:08.220 "data_size": 63488 00:10:08.220 }, 00:10:08.220 { 00:10:08.220 "name": null, 00:10:08.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.220 "is_configured": false, 00:10:08.220 "data_offset": 2048, 00:10:08.220 "data_size": 63488 00:10:08.220 } 00:10:08.220 ] 00:10:08.220 }' 00:10:08.220 18:13:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:10:08.220 18:13:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.479 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:08.479 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:08.479 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:08.479 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:08.738 [2024-07-24 18:13:17.209313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:08.738 [2024-07-24 18:13:17.209342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.738 [2024-07-24 18:13:17.209354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27a4f60 00:10:08.738 [2024-07-24 18:13:17.209362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.738 [2024-07-24 18:13:17.209590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.738 [2024-07-24 18:13:17.209602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:08.738 [2024-07-24 18:13:17.209648] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:08.738 [2024-07-24 18:13:17.209661] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:08.738 [2024-07-24 18:13:17.209722] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2601600 00:10:08.738 [2024-07-24 18:13:17.209729] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:08.738 [2024-07-24 18:13:17.209837] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2603c70 
00:10:08.738 [2024-07-24 18:13:17.209913] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2601600 00:10:08.738 [2024-07-24 18:13:17.209920] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2601600 00:10:08.738 [2024-07-24 18:13:17.209983] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.738 pt2 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:08.738 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:08.739 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:08.739 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.739 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:10:08.998 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:08.998 "name": "raid_bdev1", 00:10:08.998 "uuid": "f361beb3-5ea3-46ca-a9dc-632ae0d8f4be", 00:10:08.998 "strip_size_kb": 64, 00:10:08.998 "state": "online", 00:10:08.998 "raid_level": "raid0", 00:10:08.998 "superblock": true, 00:10:08.998 "num_base_bdevs": 2, 00:10:08.998 "num_base_bdevs_discovered": 2, 00:10:08.998 "num_base_bdevs_operational": 2, 00:10:08.998 "base_bdevs_list": [ 00:10:08.998 { 00:10:08.998 "name": "pt1", 00:10:08.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.998 "is_configured": true, 00:10:08.998 "data_offset": 2048, 00:10:08.998 "data_size": 63488 00:10:08.998 }, 00:10:08.998 { 00:10:08.998 "name": "pt2", 00:10:08.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.998 "is_configured": true, 00:10:08.998 "data_offset": 2048, 00:10:08.998 "data_size": 63488 00:10:08.998 } 00:10:08.998 ] 00:10:08.998 }' 00:10:08.998 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:08.998 18:13:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.567 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:09.567 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:09.567 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:09.567 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:09.567 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:09.567 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:09.567 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:09.567 18:13:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:09.567 [2024-07-24 18:13:18.039761] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.567 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:09.567 "name": "raid_bdev1", 00:10:09.567 "aliases": [ 00:10:09.567 "f361beb3-5ea3-46ca-a9dc-632ae0d8f4be" 00:10:09.567 ], 00:10:09.567 "product_name": "Raid Volume", 00:10:09.567 "block_size": 512, 00:10:09.567 "num_blocks": 126976, 00:10:09.567 "uuid": "f361beb3-5ea3-46ca-a9dc-632ae0d8f4be", 00:10:09.567 "assigned_rate_limits": { 00:10:09.567 "rw_ios_per_sec": 0, 00:10:09.567 "rw_mbytes_per_sec": 0, 00:10:09.567 "r_mbytes_per_sec": 0, 00:10:09.567 "w_mbytes_per_sec": 0 00:10:09.567 }, 00:10:09.567 "claimed": false, 00:10:09.567 "zoned": false, 00:10:09.567 "supported_io_types": { 00:10:09.567 "read": true, 00:10:09.567 "write": true, 00:10:09.567 "unmap": true, 00:10:09.567 "flush": true, 00:10:09.567 "reset": true, 00:10:09.567 "nvme_admin": false, 00:10:09.567 "nvme_io": false, 00:10:09.567 "nvme_io_md": false, 00:10:09.567 "write_zeroes": true, 00:10:09.567 "zcopy": false, 00:10:09.567 "get_zone_info": false, 00:10:09.567 "zone_management": false, 00:10:09.567 "zone_append": false, 00:10:09.567 "compare": false, 00:10:09.567 "compare_and_write": false, 00:10:09.567 "abort": false, 00:10:09.567 "seek_hole": false, 00:10:09.567 "seek_data": false, 00:10:09.567 "copy": false, 00:10:09.567 "nvme_iov_md": false 00:10:09.567 }, 00:10:09.567 "memory_domains": [ 00:10:09.567 { 00:10:09.567 "dma_device_id": "system", 00:10:09.567 "dma_device_type": 1 00:10:09.567 }, 00:10:09.567 { 00:10:09.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.567 "dma_device_type": 2 00:10:09.567 }, 00:10:09.567 { 00:10:09.567 "dma_device_id": "system", 00:10:09.567 "dma_device_type": 1 00:10:09.567 }, 00:10:09.567 { 
00:10:09.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.567 "dma_device_type": 2 00:10:09.567 } 00:10:09.567 ], 00:10:09.567 "driver_specific": { 00:10:09.567 "raid": { 00:10:09.567 "uuid": "f361beb3-5ea3-46ca-a9dc-632ae0d8f4be", 00:10:09.567 "strip_size_kb": 64, 00:10:09.567 "state": "online", 00:10:09.567 "raid_level": "raid0", 00:10:09.567 "superblock": true, 00:10:09.567 "num_base_bdevs": 2, 00:10:09.567 "num_base_bdevs_discovered": 2, 00:10:09.567 "num_base_bdevs_operational": 2, 00:10:09.567 "base_bdevs_list": [ 00:10:09.567 { 00:10:09.567 "name": "pt1", 00:10:09.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.567 "is_configured": true, 00:10:09.567 "data_offset": 2048, 00:10:09.567 "data_size": 63488 00:10:09.567 }, 00:10:09.567 { 00:10:09.567 "name": "pt2", 00:10:09.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.567 "is_configured": true, 00:10:09.567 "data_offset": 2048, 00:10:09.567 "data_size": 63488 00:10:09.567 } 00:10:09.567 ] 00:10:09.567 } 00:10:09.567 } 00:10:09.567 }' 00:10:09.567 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.567 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:09.567 pt2' 00:10:09.567 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:09.567 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:09.567 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:09.827 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:09.827 "name": "pt1", 00:10:09.827 "aliases": [ 00:10:09.827 "00000000-0000-0000-0000-000000000001" 00:10:09.827 ], 00:10:09.827 "product_name": "passthru", 
00:10:09.827 "block_size": 512, 00:10:09.827 "num_blocks": 65536, 00:10:09.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.827 "assigned_rate_limits": { 00:10:09.827 "rw_ios_per_sec": 0, 00:10:09.827 "rw_mbytes_per_sec": 0, 00:10:09.827 "r_mbytes_per_sec": 0, 00:10:09.827 "w_mbytes_per_sec": 0 00:10:09.827 }, 00:10:09.827 "claimed": true, 00:10:09.827 "claim_type": "exclusive_write", 00:10:09.827 "zoned": false, 00:10:09.827 "supported_io_types": { 00:10:09.827 "read": true, 00:10:09.827 "write": true, 00:10:09.827 "unmap": true, 00:10:09.827 "flush": true, 00:10:09.827 "reset": true, 00:10:09.827 "nvme_admin": false, 00:10:09.827 "nvme_io": false, 00:10:09.827 "nvme_io_md": false, 00:10:09.827 "write_zeroes": true, 00:10:09.827 "zcopy": true, 00:10:09.827 "get_zone_info": false, 00:10:09.827 "zone_management": false, 00:10:09.827 "zone_append": false, 00:10:09.827 "compare": false, 00:10:09.827 "compare_and_write": false, 00:10:09.827 "abort": true, 00:10:09.827 "seek_hole": false, 00:10:09.827 "seek_data": false, 00:10:09.827 "copy": true, 00:10:09.827 "nvme_iov_md": false 00:10:09.827 }, 00:10:09.827 "memory_domains": [ 00:10:09.827 { 00:10:09.827 "dma_device_id": "system", 00:10:09.827 "dma_device_type": 1 00:10:09.827 }, 00:10:09.827 { 00:10:09.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.827 "dma_device_type": 2 00:10:09.827 } 00:10:09.827 ], 00:10:09.827 "driver_specific": { 00:10:09.827 "passthru": { 00:10:09.827 "name": "pt1", 00:10:09.827 "base_bdev_name": "malloc1" 00:10:09.827 } 00:10:09.827 } 00:10:09.827 }' 00:10:09.827 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:09.827 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:09.827 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:09.827 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:09.827 18:13:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.087 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:10.087 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.087 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.087 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:10.087 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.087 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.087 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:10.087 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:10.087 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:10.087 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:10.345 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:10.345 "name": "pt2", 00:10:10.345 "aliases": [ 00:10:10.345 "00000000-0000-0000-0000-000000000002" 00:10:10.345 ], 00:10:10.345 "product_name": "passthru", 00:10:10.345 "block_size": 512, 00:10:10.345 "num_blocks": 65536, 00:10:10.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.345 "assigned_rate_limits": { 00:10:10.346 "rw_ios_per_sec": 0, 00:10:10.346 "rw_mbytes_per_sec": 0, 00:10:10.346 "r_mbytes_per_sec": 0, 00:10:10.346 "w_mbytes_per_sec": 0 00:10:10.346 }, 00:10:10.346 "claimed": true, 00:10:10.346 "claim_type": "exclusive_write", 00:10:10.346 "zoned": false, 00:10:10.346 "supported_io_types": { 00:10:10.346 "read": true, 00:10:10.346 "write": true, 00:10:10.346 "unmap": true, 00:10:10.346 
"flush": true, 00:10:10.346 "reset": true, 00:10:10.346 "nvme_admin": false, 00:10:10.346 "nvme_io": false, 00:10:10.346 "nvme_io_md": false, 00:10:10.346 "write_zeroes": true, 00:10:10.346 "zcopy": true, 00:10:10.346 "get_zone_info": false, 00:10:10.346 "zone_management": false, 00:10:10.346 "zone_append": false, 00:10:10.346 "compare": false, 00:10:10.346 "compare_and_write": false, 00:10:10.346 "abort": true, 00:10:10.346 "seek_hole": false, 00:10:10.346 "seek_data": false, 00:10:10.346 "copy": true, 00:10:10.346 "nvme_iov_md": false 00:10:10.346 }, 00:10:10.346 "memory_domains": [ 00:10:10.346 { 00:10:10.346 "dma_device_id": "system", 00:10:10.346 "dma_device_type": 1 00:10:10.346 }, 00:10:10.346 { 00:10:10.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.346 "dma_device_type": 2 00:10:10.346 } 00:10:10.346 ], 00:10:10.346 "driver_specific": { 00:10:10.346 "passthru": { 00:10:10.346 "name": "pt2", 00:10:10.346 "base_bdev_name": "malloc2" 00:10:10.346 } 00:10:10.346 } 00:10:10.346 }' 00:10:10.346 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:10.346 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:10.346 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:10.346 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.346 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.346 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:10.346 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.346 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.604 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:10.604 18:13:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:10:10.604 18:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.604 18:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:10.604 18:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:10.604 18:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:10.605 [2024-07-24 18:13:19.186697] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.863 18:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f361beb3-5ea3-46ca-a9dc-632ae0d8f4be '!=' f361beb3-5ea3-46ca-a9dc-632ae0d8f4be ']' 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2153249 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2153249 ']' 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2153249 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2153249 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.864 18:13:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2153249' 00:10:10.864 killing process with pid 2153249 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2153249 00:10:10.864 [2024-07-24 18:13:19.261123] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.864 [2024-07-24 18:13:19.261163] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.864 [2024-07-24 18:13:19.261193] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.864 [2024-07-24 18:13:19.261201] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2601600 name raid_bdev1, state offline 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2153249 00:10:10.864 [2024-07-24 18:13:19.276227] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:10.864 00:10:10.864 real 0m8.134s 00:10:10.864 user 0m14.338s 00:10:10.864 sys 0m1.644s 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.864 18:13:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.864 ************************************ 00:10:10.864 END TEST raid_superblock_test 00:10:10.864 ************************************ 00:10:11.123 18:13:19 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:11.123 18:13:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:11.123 18:13:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.123 18:13:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.123 ************************************ 00:10:11.123 START TEST raid_read_error_test 
00:10:11.123 ************************************ 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:11.123 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:11.124 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.LfqftJYrNm 00:10:11.124 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:11.124 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2154865 00:10:11.124 18:13:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2154865 /var/tmp/spdk-raid.sock 00:10:11.124 18:13:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2154865 ']' 00:10:11.124 18:13:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:11.124 18:13:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.124 18:13:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:11.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:10:11.124 18:13:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.124 18:13:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.124 [2024-07-24 18:13:19.574489] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:10:11.124 [2024-07-24 18:13:19.574534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154865 ] 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:01.0 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:01.1 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:01.2 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:01.3 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:01.4 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:01.5 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:01.6 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:01.7 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:02.0 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested 
device 0000:b3:02.1 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:02.2 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:02.3 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:02.4 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:02.5 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:02.6 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b3:02.7 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:01.0 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:01.1 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:01.2 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:01.3 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:01.4 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:01.5 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:01.6 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:01.7 
cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:02.0 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:02.1 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:02.2 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:02.3 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:02.4 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:02.5 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:02.6 cannot be used 00:10:11.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:11.124 EAL: Requested device 0000:b5:02.7 cannot be used 00:10:11.124 [2024-07-24 18:13:19.666837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.383 [2024-07-24 18:13:19.740482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.383 [2024-07-24 18:13:19.794367] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.383 [2024-07-24 18:13:19.794396] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.951 18:13:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.951 18:13:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:11.951 18:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:11.952 18:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:11.952 BaseBdev1_malloc 00:10:11.952 18:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:12.211 true 00:10:12.211 18:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:12.469 [2024-07-24 18:13:20.863071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:12.469 [2024-07-24 18:13:20.863104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.469 [2024-07-24 18:13:20.863117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd5ed0 00:10:12.469 [2024-07-24 18:13:20.863125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.469 [2024-07-24 18:13:20.864266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.469 [2024-07-24 18:13:20.864290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:12.469 BaseBdev1 00:10:12.470 18:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:12.470 18:13:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:12.470 BaseBdev2_malloc 00:10:12.470 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:12.728 true 00:10:12.728 18:13:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:12.988 [2024-07-24 18:13:21.363876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:12.988 [2024-07-24 18:13:21.363910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.988 [2024-07-24 18:13:21.363925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fdab60 00:10:12.988 [2024-07-24 18:13:21.363934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.988 [2024-07-24 18:13:21.364985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.988 [2024-07-24 18:13:21.365008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:12.988 BaseBdev2 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:12.988 [2024-07-24 18:13:21.532338] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.988 [2024-07-24 18:13:21.533192] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.988 [2024-07-24 18:13:21.533318] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fdc790 00:10:12.988 [2024-07-24 18:13:21.533327] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:12.988 [2024-07-24 18:13:21.533450] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e30e30 00:10:12.988 [2024-07-24 18:13:21.533548] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fdc790 00:10:12.988 [2024-07-24 18:13:21.533554] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fdc790 00:10:12.988 [2024-07-24 18:13:21.533619] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.988 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.247 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:13.247 "name": "raid_bdev1", 00:10:13.247 "uuid": "35566297-d38e-42df-9daa-9439df875b9b", 00:10:13.247 "strip_size_kb": 64, 00:10:13.247 "state": "online", 00:10:13.247 "raid_level": "raid0", 00:10:13.247 "superblock": true, 00:10:13.247 
"num_base_bdevs": 2, 00:10:13.247 "num_base_bdevs_discovered": 2, 00:10:13.247 "num_base_bdevs_operational": 2, 00:10:13.247 "base_bdevs_list": [ 00:10:13.247 { 00:10:13.247 "name": "BaseBdev1", 00:10:13.247 "uuid": "4cb9d014-96ee-51fc-acac-2666937cd1d2", 00:10:13.247 "is_configured": true, 00:10:13.247 "data_offset": 2048, 00:10:13.247 "data_size": 63488 00:10:13.247 }, 00:10:13.247 { 00:10:13.247 "name": "BaseBdev2", 00:10:13.247 "uuid": "5b0dbb8a-0eda-5301-b829-b7397b50c1b7", 00:10:13.247 "is_configured": true, 00:10:13.247 "data_offset": 2048, 00:10:13.247 "data_size": 63488 00:10:13.247 } 00:10:13.247 ] 00:10:13.247 }' 00:10:13.247 18:13:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:13.247 18:13:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.816 18:13:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:13.816 18:13:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:13.816 [2024-07-24 18:13:22.262429] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fd77c0 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.818 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.077 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:15.077 "name": "raid_bdev1", 00:10:15.077 "uuid": "35566297-d38e-42df-9daa-9439df875b9b", 00:10:15.077 "strip_size_kb": 64, 00:10:15.077 "state": "online", 00:10:15.077 "raid_level": "raid0", 00:10:15.077 "superblock": true, 00:10:15.077 "num_base_bdevs": 2, 00:10:15.077 "num_base_bdevs_discovered": 2, 00:10:15.077 "num_base_bdevs_operational": 2, 00:10:15.077 "base_bdevs_list": [ 00:10:15.077 { 00:10:15.077 "name": "BaseBdev1", 00:10:15.077 "uuid": "4cb9d014-96ee-51fc-acac-2666937cd1d2", 00:10:15.077 "is_configured": true, 00:10:15.077 "data_offset": 2048, 00:10:15.077 "data_size": 63488 00:10:15.077 }, 00:10:15.077 { 00:10:15.077 "name": 
"BaseBdev2", 00:10:15.077 "uuid": "5b0dbb8a-0eda-5301-b829-b7397b50c1b7", 00:10:15.077 "is_configured": true, 00:10:15.077 "data_offset": 2048, 00:10:15.077 "data_size": 63488 00:10:15.077 } 00:10:15.077 ] 00:10:15.077 }' 00:10:15.077 18:13:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:15.077 18:13:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.645 18:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:15.645 [2024-07-24 18:13:24.194534] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.645 [2024-07-24 18:13:24.194564] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.645 [2024-07-24 18:13:24.196651] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.645 [2024-07-24 18:13:24.196674] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.645 [2024-07-24 18:13:24.196694] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.645 [2024-07-24 18:13:24.196701] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fdc790 name raid_bdev1, state offline 00:10:15.645 0 00:10:15.645 18:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2154865 00:10:15.645 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2154865 ']' 00:10:15.645 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2154865 00:10:15.645 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:15.645 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.645 18:13:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2154865 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2154865' 00:10:15.905 killing process with pid 2154865 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2154865 00:10:15.905 [2024-07-24 18:13:24.273828] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2154865 00:10:15.905 [2024-07-24 18:13:24.283708] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.LfqftJYrNm 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:15.905 00:10:15.905 real 0m4.953s 00:10:15.905 user 0m7.442s 00:10:15.905 sys 0m0.860s 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.905 18:13:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:15.905 ************************************ 00:10:15.905 END TEST raid_read_error_test 00:10:15.905 ************************************ 00:10:16.165 18:13:24 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:16.165 18:13:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:16.165 18:13:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.165 18:13:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.165 ************************************ 00:10:16.165 START TEST raid_write_error_test 00:10:16.165 ************************************ 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:16.165 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.wNq7tY2uej 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2155956 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2155956 /var/tmp/spdk-raid.sock 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2155956 ']' 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:16.166 18:13:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:16.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.166 18:13:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.166 [2024-07-24 18:13:24.622825] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:10:16.166 [2024-07-24 18:13:24.622871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155956 ] 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:01.0 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:01.1 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:01.2 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:01.3 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:01.4 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:01.5 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:01.6 cannot be used 
00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:01.7 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:02.0 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:02.1 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:02.2 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:02.3 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:02.4 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:02.5 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:02.6 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b3:02.7 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:01.0 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:01.1 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:01.2 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:01.3 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:01.4 cannot be used 00:10:16.166 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:01.5 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:01.6 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:01.7 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:02.0 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:02.1 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:02.2 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:02.3 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:02.4 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:02.5 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:02.6 cannot be used 00:10:16.166 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.166 EAL: Requested device 0000:b5:02.7 cannot be used 00:10:16.166 [2024-07-24 18:13:24.715246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.425 [2024-07-24 18:13:24.789735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.425 [2024-07-24 18:13:24.841196] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.425 [2024-07-24 18:13:24.841226] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.993 18:13:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.993 18:13:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:16.993 18:13:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:16.993 18:13:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:16.993 BaseBdev1_malloc 00:10:17.252 18:13:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:17.252 true 00:10:17.252 18:13:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:17.511 [2024-07-24 18:13:25.889570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:17.511 [2024-07-24 18:13:25.889605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.511 [2024-07-24 18:13:25.889619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2707ed0 00:10:17.511 [2024-07-24 18:13:25.889633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.511 [2024-07-24 18:13:25.890787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.511 [2024-07-24 18:13:25.890811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:17.511 BaseBdev1 00:10:17.511 18:13:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:17.511 18:13:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:17.511 BaseBdev2_malloc 00:10:17.511 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:17.771 true 00:10:17.771 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:18.031 [2024-07-24 18:13:26.386481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:18.031 [2024-07-24 18:13:26.386514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.031 [2024-07-24 18:13:26.386528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x270cb60 00:10:18.031 [2024-07-24 18:13:26.386536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.031 [2024-07-24 18:13:26.387575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.031 [2024-07-24 18:13:26.387597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:18.031 BaseBdev2 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:18.031 [2024-07-24 18:13:26.538896] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.031 [2024-07-24 18:13:26.539676] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.031 [2024-07-24 18:13:26.539800] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x270e790 00:10:18.031 [2024-07-24 18:13:26.539809] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:18.031 [2024-07-24 18:13:26.539932] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2562e30 00:10:18.031 [2024-07-24 18:13:26.540023] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x270e790 00:10:18.031 [2024-07-24 18:13:26.540029] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x270e790 00:10:18.031 [2024-07-24 18:13:26.540091] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.031 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.031 18:13:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.290 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.290 "name": "raid_bdev1", 00:10:18.290 "uuid": "57edec9c-fea3-4731-8027-f251eab73f4e", 00:10:18.290 "strip_size_kb": 64, 00:10:18.290 "state": "online", 00:10:18.290 "raid_level": "raid0", 00:10:18.290 "superblock": true, 00:10:18.290 "num_base_bdevs": 2, 00:10:18.290 "num_base_bdevs_discovered": 2, 00:10:18.290 "num_base_bdevs_operational": 2, 00:10:18.290 "base_bdevs_list": [ 00:10:18.290 { 00:10:18.290 "name": "BaseBdev1", 00:10:18.290 "uuid": "ee152be5-60cb-5ba7-bc9c-8ad694ced962", 00:10:18.290 "is_configured": true, 00:10:18.290 "data_offset": 2048, 00:10:18.290 "data_size": 63488 00:10:18.290 }, 00:10:18.290 { 00:10:18.290 "name": "BaseBdev2", 00:10:18.290 "uuid": "00ddb3cd-e1d0-58c7-aaa4-998c084a4ac2", 00:10:18.290 "is_configured": true, 00:10:18.290 "data_offset": 2048, 00:10:18.290 "data_size": 63488 00:10:18.290 } 00:10:18.290 ] 00:10:18.290 }' 00:10:18.290 18:13:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.290 18:13:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.858 18:13:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:18.858 18:13:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:18.858 [2024-07-24 18:13:27.260969] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27097c0 00:10:19.792 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:19.792 18:13:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:19.792 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:19.792 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:19.792 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:19.792 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:19.792 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:19.792 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:19.793 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:19.793 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:19.793 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:19.793 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:19.793 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:19.793 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:19.793 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.793 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.050 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:20.050 "name": "raid_bdev1", 00:10:20.050 "uuid": "57edec9c-fea3-4731-8027-f251eab73f4e", 00:10:20.050 "strip_size_kb": 64, 00:10:20.050 "state": "online", 00:10:20.050 
"raid_level": "raid0", 00:10:20.050 "superblock": true, 00:10:20.050 "num_base_bdevs": 2, 00:10:20.050 "num_base_bdevs_discovered": 2, 00:10:20.050 "num_base_bdevs_operational": 2, 00:10:20.050 "base_bdevs_list": [ 00:10:20.050 { 00:10:20.050 "name": "BaseBdev1", 00:10:20.050 "uuid": "ee152be5-60cb-5ba7-bc9c-8ad694ced962", 00:10:20.050 "is_configured": true, 00:10:20.050 "data_offset": 2048, 00:10:20.050 "data_size": 63488 00:10:20.050 }, 00:10:20.050 { 00:10:20.050 "name": "BaseBdev2", 00:10:20.050 "uuid": "00ddb3cd-e1d0-58c7-aaa4-998c084a4ac2", 00:10:20.050 "is_configured": true, 00:10:20.050 "data_offset": 2048, 00:10:20.050 "data_size": 63488 00:10:20.050 } 00:10:20.050 ] 00:10:20.050 }' 00:10:20.050 18:13:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:20.050 18:13:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.616 18:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:20.616 [2024-07-24 18:13:29.176734] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.616 [2024-07-24 18:13:29.176763] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.616 [2024-07-24 18:13:29.178738] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.616 [2024-07-24 18:13:29.178760] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.616 [2024-07-24 18:13:29.178780] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.616 [2024-07-24 18:13:29.178787] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x270e790 name raid_bdev1, state offline 00:10:20.616 0 00:10:20.616 18:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2155956 
00:10:20.616 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2155956 ']' 00:10:20.616 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2155956 00:10:20.616 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:20.616 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.616 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2155956 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2155956' 00:10:20.875 killing process with pid 2155956 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2155956 00:10:20.875 [2024-07-24 18:13:29.247748] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2155956 00:10:20.875 [2024-07-24 18:13:29.257348] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.wNq7tY2uej 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # 
case $1 in 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:20.875 00:10:20.875 real 0m4.887s 00:10:20.875 user 0m7.362s 00:10:20.875 sys 0m0.836s 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.875 18:13:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.875 ************************************ 00:10:20.875 END TEST raid_write_error_test 00:10:20.875 ************************************ 00:10:21.132 18:13:29 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:21.132 18:13:29 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:21.132 18:13:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:21.132 18:13:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.133 18:13:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.133 ************************************ 00:10:21.133 START TEST raid_state_function_test 00:10:21.133 ************************************ 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:21.133 18:13:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:21.133 18:13:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2156874 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2156874' 00:10:21.133 Process raid pid: 2156874 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2156874 /var/tmp/spdk-raid.sock 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2156874 ']' 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:21.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.133 18:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.133 [2024-07-24 18:13:29.582843] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:10:21.133 [2024-07-24 18:13:29.582885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.134 [2024-07-24 18:13:29.675046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.392 [2024-07-24 18:13:29.744344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.392 [2024-07-24 18:13:29.793171] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.392 [2024-07-24 18:13:29.793195] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.958 18:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.958 18:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:21.958 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:21.958 [2024-07-24 18:13:30.536075] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.958 [2024-07-24 18:13:30.536107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:21.958 [2024-07-24 18:13:30.536114] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.958 [2024-07-24 18:13:30.536122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:22.216 "name": "Existed_Raid", 00:10:22.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.216 "strip_size_kb": 64, 
00:10:22.216 "state": "configuring", 00:10:22.216 "raid_level": "concat", 00:10:22.216 "superblock": false, 00:10:22.216 "num_base_bdevs": 2, 00:10:22.216 "num_base_bdevs_discovered": 0, 00:10:22.216 "num_base_bdevs_operational": 2, 00:10:22.216 "base_bdevs_list": [ 00:10:22.216 { 00:10:22.216 "name": "BaseBdev1", 00:10:22.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.216 "is_configured": false, 00:10:22.216 "data_offset": 0, 00:10:22.216 "data_size": 0 00:10:22.216 }, 00:10:22.216 { 00:10:22.216 "name": "BaseBdev2", 00:10:22.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.216 "is_configured": false, 00:10:22.216 "data_offset": 0, 00:10:22.216 "data_size": 0 00:10:22.216 } 00:10:22.216 ] 00:10:22.216 }' 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:22.216 18:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.783 18:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:22.783 [2024-07-24 18:13:31.374155] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.783 [2024-07-24 18:13:31.374177] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11921a0 name Existed_Raid, state configuring 00:10:23.041 18:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:23.041 [2024-07-24 18:13:31.530571] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.041 [2024-07-24 18:13:31.530590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.041 [2024-07-24 18:13:31.530597] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.041 [2024-07-24 18:13:31.530604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.041 18:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.298 [2024-07-24 18:13:31.695622] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.298 BaseBdev1 00:10:23.298 18:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:23.298 18:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:23.298 18:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.298 18:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:23.298 18:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.298 18:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.298 18:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:23.298 18:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:23.557 [ 00:10:23.557 { 00:10:23.557 "name": "BaseBdev1", 00:10:23.557 "aliases": [ 00:10:23.557 "a0621238-ce71-48bd-a31d-b011b093487f" 00:10:23.557 ], 00:10:23.557 "product_name": "Malloc disk", 00:10:23.557 "block_size": 512, 00:10:23.557 "num_blocks": 65536, 00:10:23.557 "uuid": 
"a0621238-ce71-48bd-a31d-b011b093487f", 00:10:23.557 "assigned_rate_limits": { 00:10:23.557 "rw_ios_per_sec": 0, 00:10:23.557 "rw_mbytes_per_sec": 0, 00:10:23.557 "r_mbytes_per_sec": 0, 00:10:23.557 "w_mbytes_per_sec": 0 00:10:23.557 }, 00:10:23.557 "claimed": true, 00:10:23.557 "claim_type": "exclusive_write", 00:10:23.557 "zoned": false, 00:10:23.557 "supported_io_types": { 00:10:23.557 "read": true, 00:10:23.557 "write": true, 00:10:23.557 "unmap": true, 00:10:23.557 "flush": true, 00:10:23.557 "reset": true, 00:10:23.557 "nvme_admin": false, 00:10:23.557 "nvme_io": false, 00:10:23.557 "nvme_io_md": false, 00:10:23.557 "write_zeroes": true, 00:10:23.557 "zcopy": true, 00:10:23.557 "get_zone_info": false, 00:10:23.557 "zone_management": false, 00:10:23.557 "zone_append": false, 00:10:23.557 "compare": false, 00:10:23.557 "compare_and_write": false, 00:10:23.557 "abort": true, 00:10:23.557 "seek_hole": false, 00:10:23.557 "seek_data": false, 00:10:23.557 "copy": true, 00:10:23.557 "nvme_iov_md": false 00:10:23.557 }, 00:10:23.557 "memory_domains": [ 00:10:23.557 { 00:10:23.557 "dma_device_id": "system", 00:10:23.557 "dma_device_type": 1 00:10:23.557 }, 00:10:23.557 { 00:10:23.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.557 "dma_device_type": 2 00:10:23.557 } 00:10:23.557 ], 00:10:23.557 "driver_specific": {} 00:10:23.557 } 00:10:23.557 ] 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:23.557 18:13:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.557 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.816 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:23.816 "name": "Existed_Raid", 00:10:23.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.816 "strip_size_kb": 64, 00:10:23.816 "state": "configuring", 00:10:23.816 "raid_level": "concat", 00:10:23.816 "superblock": false, 00:10:23.816 "num_base_bdevs": 2, 00:10:23.816 "num_base_bdevs_discovered": 1, 00:10:23.816 "num_base_bdevs_operational": 2, 00:10:23.816 "base_bdevs_list": [ 00:10:23.816 { 00:10:23.816 "name": "BaseBdev1", 00:10:23.816 "uuid": "a0621238-ce71-48bd-a31d-b011b093487f", 00:10:23.816 "is_configured": true, 00:10:23.816 "data_offset": 0, 00:10:23.816 "data_size": 65536 00:10:23.816 }, 00:10:23.816 { 00:10:23.816 "name": "BaseBdev2", 00:10:23.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.816 "is_configured": false, 00:10:23.816 "data_offset": 0, 00:10:23.816 "data_size": 0 00:10:23.816 } 00:10:23.816 ] 00:10:23.816 }' 00:10:23.816 18:13:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:23.816 18:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:24.382 [2024-07-24 18:13:32.850594] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.382 [2024-07-24 18:13:32.850624] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1191a90 name Existed_Raid, state configuring 00:10:24.382 18:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:24.641 [2024-07-24 18:13:33.011041] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.641 [2024-07-24 18:13:33.012121] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.641 [2024-07-24 18:13:33.012152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:24.641 "name": "Existed_Raid", 00:10:24.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.641 "strip_size_kb": 64, 00:10:24.641 "state": "configuring", 00:10:24.641 "raid_level": "concat", 00:10:24.641 "superblock": false, 00:10:24.641 "num_base_bdevs": 2, 00:10:24.641 "num_base_bdevs_discovered": 1, 00:10:24.641 "num_base_bdevs_operational": 2, 00:10:24.641 "base_bdevs_list": [ 00:10:24.641 { 00:10:24.641 "name": "BaseBdev1", 00:10:24.641 "uuid": "a0621238-ce71-48bd-a31d-b011b093487f", 00:10:24.641 "is_configured": true, 00:10:24.641 "data_offset": 0, 00:10:24.641 "data_size": 65536 00:10:24.641 }, 00:10:24.641 { 00:10:24.641 "name": "BaseBdev2", 00:10:24.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.641 "is_configured": false, 00:10:24.641 "data_offset": 0, 00:10:24.641 "data_size": 0 00:10:24.641 } 00:10:24.641 ] 00:10:24.641 }' 
00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:24.641 18:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.206 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.464 [2024-07-24 18:13:33.872099] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.464 [2024-07-24 18:13:33.872130] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1192880 00:10:25.464 [2024-07-24 18:13:33.872135] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:25.464 [2024-07-24 18:13:33.872263] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13459a0 00:10:25.464 [2024-07-24 18:13:33.872346] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1192880 00:10:25.464 [2024-07-24 18:13:33.872353] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1192880 00:10:25.464 [2024-07-24 18:13:33.872473] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.464 BaseBdev2 00:10:25.464 18:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:25.464 18:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:25.464 18:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.464 18:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.464 18:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.464 18:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:25.464 18:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:25.464 18:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.722 [ 00:10:25.722 { 00:10:25.723 "name": "BaseBdev2", 00:10:25.723 "aliases": [ 00:10:25.723 "b9fb3aaa-26f7-443d-8fbc-c2f0074b4646" 00:10:25.723 ], 00:10:25.723 "product_name": "Malloc disk", 00:10:25.723 "block_size": 512, 00:10:25.723 "num_blocks": 65536, 00:10:25.723 "uuid": "b9fb3aaa-26f7-443d-8fbc-c2f0074b4646", 00:10:25.723 "assigned_rate_limits": { 00:10:25.723 "rw_ios_per_sec": 0, 00:10:25.723 "rw_mbytes_per_sec": 0, 00:10:25.723 "r_mbytes_per_sec": 0, 00:10:25.723 "w_mbytes_per_sec": 0 00:10:25.723 }, 00:10:25.723 "claimed": true, 00:10:25.723 "claim_type": "exclusive_write", 00:10:25.723 "zoned": false, 00:10:25.723 "supported_io_types": { 00:10:25.723 "read": true, 00:10:25.723 "write": true, 00:10:25.723 "unmap": true, 00:10:25.723 "flush": true, 00:10:25.723 "reset": true, 00:10:25.723 "nvme_admin": false, 00:10:25.723 "nvme_io": false, 00:10:25.723 "nvme_io_md": false, 00:10:25.723 "write_zeroes": true, 00:10:25.723 "zcopy": true, 00:10:25.723 "get_zone_info": false, 00:10:25.723 "zone_management": false, 00:10:25.723 "zone_append": false, 00:10:25.723 "compare": false, 00:10:25.723 "compare_and_write": false, 00:10:25.723 "abort": true, 00:10:25.723 "seek_hole": false, 00:10:25.723 "seek_data": false, 00:10:25.723 "copy": true, 00:10:25.723 "nvme_iov_md": false 00:10:25.723 }, 00:10:25.723 "memory_domains": [ 00:10:25.723 { 00:10:25.723 "dma_device_id": "system", 00:10:25.723 "dma_device_type": 1 00:10:25.723 }, 00:10:25.723 { 00:10:25.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.723 "dma_device_type": 2 
00:10:25.723 } 00:10:25.723 ], 00:10:25.723 "driver_specific": {} 00:10:25.723 } 00:10:25.723 ] 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.723 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.980 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:10:25.980 "name": "Existed_Raid", 00:10:25.980 "uuid": "222cb3cf-d6f8-4698-823c-9511b095d91a", 00:10:25.980 "strip_size_kb": 64, 00:10:25.980 "state": "online", 00:10:25.980 "raid_level": "concat", 00:10:25.980 "superblock": false, 00:10:25.980 "num_base_bdevs": 2, 00:10:25.980 "num_base_bdevs_discovered": 2, 00:10:25.980 "num_base_bdevs_operational": 2, 00:10:25.980 "base_bdevs_list": [ 00:10:25.980 { 00:10:25.980 "name": "BaseBdev1", 00:10:25.980 "uuid": "a0621238-ce71-48bd-a31d-b011b093487f", 00:10:25.980 "is_configured": true, 00:10:25.980 "data_offset": 0, 00:10:25.980 "data_size": 65536 00:10:25.980 }, 00:10:25.980 { 00:10:25.980 "name": "BaseBdev2", 00:10:25.980 "uuid": "b9fb3aaa-26f7-443d-8fbc-c2f0074b4646", 00:10:25.980 "is_configured": true, 00:10:25.980 "data_offset": 0, 00:10:25.980 "data_size": 65536 00:10:25.980 } 00:10:25.980 ] 00:10:25.980 }' 00:10:25.980 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:25.980 18:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.545 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:26.545 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:26.545 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:26.545 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:26.545 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:26.545 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:26.545 18:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:26.545 18:13:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:26.545 [2024-07-24 18:13:35.015232] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.545 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:26.545 "name": "Existed_Raid", 00:10:26.545 "aliases": [ 00:10:26.545 "222cb3cf-d6f8-4698-823c-9511b095d91a" 00:10:26.545 ], 00:10:26.545 "product_name": "Raid Volume", 00:10:26.545 "block_size": 512, 00:10:26.545 "num_blocks": 131072, 00:10:26.545 "uuid": "222cb3cf-d6f8-4698-823c-9511b095d91a", 00:10:26.545 "assigned_rate_limits": { 00:10:26.545 "rw_ios_per_sec": 0, 00:10:26.545 "rw_mbytes_per_sec": 0, 00:10:26.545 "r_mbytes_per_sec": 0, 00:10:26.545 "w_mbytes_per_sec": 0 00:10:26.545 }, 00:10:26.545 "claimed": false, 00:10:26.545 "zoned": false, 00:10:26.545 "supported_io_types": { 00:10:26.545 "read": true, 00:10:26.545 "write": true, 00:10:26.545 "unmap": true, 00:10:26.545 "flush": true, 00:10:26.545 "reset": true, 00:10:26.545 "nvme_admin": false, 00:10:26.545 "nvme_io": false, 00:10:26.545 "nvme_io_md": false, 00:10:26.545 "write_zeroes": true, 00:10:26.545 "zcopy": false, 00:10:26.545 "get_zone_info": false, 00:10:26.545 "zone_management": false, 00:10:26.545 "zone_append": false, 00:10:26.545 "compare": false, 00:10:26.545 "compare_and_write": false, 00:10:26.545 "abort": false, 00:10:26.545 "seek_hole": false, 00:10:26.545 "seek_data": false, 00:10:26.545 "copy": false, 00:10:26.545 "nvme_iov_md": false 00:10:26.545 }, 00:10:26.545 "memory_domains": [ 00:10:26.545 { 00:10:26.545 "dma_device_id": "system", 00:10:26.545 "dma_device_type": 1 00:10:26.545 }, 00:10:26.545 { 00:10:26.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.545 "dma_device_type": 2 00:10:26.545 }, 00:10:26.545 { 00:10:26.545 "dma_device_id": "system", 00:10:26.546 "dma_device_type": 1 00:10:26.546 }, 00:10:26.546 { 00:10:26.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:26.546 "dma_device_type": 2 00:10:26.546 } 00:10:26.546 ], 00:10:26.546 "driver_specific": { 00:10:26.546 "raid": { 00:10:26.546 "uuid": "222cb3cf-d6f8-4698-823c-9511b095d91a", 00:10:26.546 "strip_size_kb": 64, 00:10:26.546 "state": "online", 00:10:26.546 "raid_level": "concat", 00:10:26.546 "superblock": false, 00:10:26.546 "num_base_bdevs": 2, 00:10:26.546 "num_base_bdevs_discovered": 2, 00:10:26.546 "num_base_bdevs_operational": 2, 00:10:26.546 "base_bdevs_list": [ 00:10:26.546 { 00:10:26.546 "name": "BaseBdev1", 00:10:26.546 "uuid": "a0621238-ce71-48bd-a31d-b011b093487f", 00:10:26.546 "is_configured": true, 00:10:26.546 "data_offset": 0, 00:10:26.546 "data_size": 65536 00:10:26.546 }, 00:10:26.546 { 00:10:26.546 "name": "BaseBdev2", 00:10:26.546 "uuid": "b9fb3aaa-26f7-443d-8fbc-c2f0074b4646", 00:10:26.546 "is_configured": true, 00:10:26.546 "data_offset": 0, 00:10:26.546 "data_size": 65536 00:10:26.546 } 00:10:26.546 ] 00:10:26.546 } 00:10:26.546 } 00:10:26.546 }' 00:10:26.546 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.546 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:26.546 BaseBdev2' 00:10:26.546 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:26.546 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:26.546 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:26.803 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:26.803 "name": "BaseBdev1", 00:10:26.803 "aliases": [ 00:10:26.803 "a0621238-ce71-48bd-a31d-b011b093487f" 00:10:26.803 ], 00:10:26.803 "product_name": "Malloc disk", 
00:10:26.803 "block_size": 512, 00:10:26.803 "num_blocks": 65536, 00:10:26.803 "uuid": "a0621238-ce71-48bd-a31d-b011b093487f", 00:10:26.803 "assigned_rate_limits": { 00:10:26.803 "rw_ios_per_sec": 0, 00:10:26.803 "rw_mbytes_per_sec": 0, 00:10:26.803 "r_mbytes_per_sec": 0, 00:10:26.803 "w_mbytes_per_sec": 0 00:10:26.803 }, 00:10:26.803 "claimed": true, 00:10:26.803 "claim_type": "exclusive_write", 00:10:26.803 "zoned": false, 00:10:26.803 "supported_io_types": { 00:10:26.803 "read": true, 00:10:26.803 "write": true, 00:10:26.803 "unmap": true, 00:10:26.803 "flush": true, 00:10:26.803 "reset": true, 00:10:26.803 "nvme_admin": false, 00:10:26.803 "nvme_io": false, 00:10:26.803 "nvme_io_md": false, 00:10:26.803 "write_zeroes": true, 00:10:26.803 "zcopy": true, 00:10:26.803 "get_zone_info": false, 00:10:26.803 "zone_management": false, 00:10:26.803 "zone_append": false, 00:10:26.803 "compare": false, 00:10:26.803 "compare_and_write": false, 00:10:26.803 "abort": true, 00:10:26.803 "seek_hole": false, 00:10:26.803 "seek_data": false, 00:10:26.803 "copy": true, 00:10:26.803 "nvme_iov_md": false 00:10:26.803 }, 00:10:26.803 "memory_domains": [ 00:10:26.803 { 00:10:26.803 "dma_device_id": "system", 00:10:26.803 "dma_device_type": 1 00:10:26.803 }, 00:10:26.803 { 00:10:26.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.803 "dma_device_type": 2 00:10:26.803 } 00:10:26.803 ], 00:10:26.803 "driver_specific": {} 00:10:26.803 }' 00:10:26.803 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:26.803 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:26.803 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:26.803 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:26.803 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:27.062 18:13:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:27.062 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:27.062 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:27.062 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:27.062 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:27.062 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:27.062 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:27.062 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:27.062 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:27.062 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:27.321 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:27.321 "name": "BaseBdev2", 00:10:27.321 "aliases": [ 00:10:27.321 "b9fb3aaa-26f7-443d-8fbc-c2f0074b4646" 00:10:27.321 ], 00:10:27.321 "product_name": "Malloc disk", 00:10:27.321 "block_size": 512, 00:10:27.321 "num_blocks": 65536, 00:10:27.321 "uuid": "b9fb3aaa-26f7-443d-8fbc-c2f0074b4646", 00:10:27.321 "assigned_rate_limits": { 00:10:27.321 "rw_ios_per_sec": 0, 00:10:27.321 "rw_mbytes_per_sec": 0, 00:10:27.321 "r_mbytes_per_sec": 0, 00:10:27.321 "w_mbytes_per_sec": 0 00:10:27.321 }, 00:10:27.321 "claimed": true, 00:10:27.321 "claim_type": "exclusive_write", 00:10:27.321 "zoned": false, 00:10:27.321 "supported_io_types": { 00:10:27.321 "read": true, 00:10:27.321 "write": true, 00:10:27.321 "unmap": true, 00:10:27.321 "flush": true, 00:10:27.321 "reset": 
true, 00:10:27.321 "nvme_admin": false, 00:10:27.321 "nvme_io": false, 00:10:27.321 "nvme_io_md": false, 00:10:27.321 "write_zeroes": true, 00:10:27.321 "zcopy": true, 00:10:27.321 "get_zone_info": false, 00:10:27.321 "zone_management": false, 00:10:27.321 "zone_append": false, 00:10:27.321 "compare": false, 00:10:27.321 "compare_and_write": false, 00:10:27.321 "abort": true, 00:10:27.321 "seek_hole": false, 00:10:27.321 "seek_data": false, 00:10:27.321 "copy": true, 00:10:27.321 "nvme_iov_md": false 00:10:27.321 }, 00:10:27.321 "memory_domains": [ 00:10:27.321 { 00:10:27.321 "dma_device_id": "system", 00:10:27.321 "dma_device_type": 1 00:10:27.321 }, 00:10:27.321 { 00:10:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.321 "dma_device_type": 2 00:10:27.321 } 00:10:27.321 ], 00:10:27.321 "driver_specific": {} 00:10:27.321 }' 00:10:27.321 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:27.321 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:27.321 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:27.321 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:27.321 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:27.321 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:27.321 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:27.580 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:27.580 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:27.580 18:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:27.580 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:27.580 18:13:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:27.580 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:27.839 [2024-07-24 18:13:36.210153] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.839 [2024-07-24 18:13:36.210173] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.839 [2024-07-24 18:13:36.210202] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:10:27.839 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.840 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.840 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.840 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.840 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.840 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.840 "name": "Existed_Raid", 00:10:27.840 "uuid": "222cb3cf-d6f8-4698-823c-9511b095d91a", 00:10:27.840 "strip_size_kb": 64, 00:10:27.840 "state": "offline", 00:10:27.840 "raid_level": "concat", 00:10:27.840 "superblock": false, 00:10:27.840 "num_base_bdevs": 2, 00:10:27.840 "num_base_bdevs_discovered": 1, 00:10:27.840 "num_base_bdevs_operational": 1, 00:10:27.840 "base_bdevs_list": [ 00:10:27.840 { 00:10:27.840 "name": null, 00:10:27.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.840 "is_configured": false, 00:10:27.840 "data_offset": 0, 00:10:27.840 "data_size": 65536 00:10:27.840 }, 00:10:27.840 { 00:10:27.840 "name": "BaseBdev2", 00:10:27.840 "uuid": "b9fb3aaa-26f7-443d-8fbc-c2f0074b4646", 00:10:27.840 "is_configured": true, 00:10:27.840 "data_offset": 0, 00:10:27.840 "data_size": 65536 00:10:27.840 } 00:10:27.840 ] 00:10:27.840 }' 00:10:27.840 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.840 18:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.407 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:28.407 18:13:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:28.407 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.407 18:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:28.699 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:28.699 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.699 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:28.699 [2024-07-24 18:13:37.181455] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.699 [2024-07-24 18:13:37.181495] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1192880 name Existed_Raid, state offline 00:10:28.699 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:28.699 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:28.699 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.699 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2156874 
00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2156874 ']' 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2156874 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2156874 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2156874' 00:10:28.958 killing process with pid 2156874 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2156874 00:10:28.958 [2024-07-24 18:13:37.419413] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.958 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2156874 00:10:28.958 [2024-07-24 18:13:37.420215] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:29.218 00:10:29.218 real 0m8.072s 00:10:29.218 user 0m14.187s 00:10:29.218 sys 0m1.567s 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.218 ************************************ 00:10:29.218 END TEST raid_state_function_test 00:10:29.218 ************************************ 00:10:29.218 18:13:37 bdev_raid -- 
bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:29.218 18:13:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:29.218 18:13:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.218 18:13:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.218 ************************************ 00:10:29.218 START TEST raid_state_function_test_sb 00:10:29.218 ************************************ 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:29.218 18:13:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2158449 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2158449' 00:10:29.218 Process raid pid: 2158449 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2158449 /var/tmp/spdk-raid.sock 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2158449 ']' 00:10:29.218 18:13:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:29.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.218 18:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.218 [2024-07-24 18:13:37.721430] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:10:29.218 [2024-07-24 18:13:37.721474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:01.0 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:01.1 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:01.2 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:01.3 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:01.4 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:01.5 cannot be used 
00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:01.6 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:01.7 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:02.0 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:02.1 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:02.2 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:02.3 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:02.4 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:02.5 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:02.6 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b3:02.7 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b5:01.0 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b5:01.1 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b5:01.2 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b5:01.3 cannot be used 00:10:29.218 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b5:01.4 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b5:01.5 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b5:01.6 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b5:01.7 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b5:02.0 cannot be used 00:10:29.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.218 EAL: Requested device 0000:b5:02.1 cannot be used 00:10:29.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.219 EAL: Requested device 0000:b5:02.2 cannot be used 00:10:29.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.219 EAL: Requested device 0000:b5:02.3 cannot be used 00:10:29.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.219 EAL: Requested device 0000:b5:02.4 cannot be used 00:10:29.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.219 EAL: Requested device 0000:b5:02.5 cannot be used 00:10:29.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.219 EAL: Requested device 0000:b5:02.6 cannot be used 00:10:29.219 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.219 EAL: Requested device 0000:b5:02.7 cannot be used 00:10:29.478 [2024-07-24 18:13:37.814978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.478 [2024-07-24 18:13:37.888208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.478 [2024-07-24 18:13:37.941309] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:10:29.478 [2024-07-24 18:13:37.941334] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.046 18:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.046 18:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:30.046 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:30.305 [2024-07-24 18:13:38.668486] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.305 [2024-07-24 18:13:38.668515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.305 [2024-07-24 18:13:38.668522] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.305 [2024-07-24 18:13:38.668530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.305 "name": "Existed_Raid", 00:10:30.305 "uuid": "a9fdd4ac-5b7c-4086-bf43-195970674af2", 00:10:30.305 "strip_size_kb": 64, 00:10:30.305 "state": "configuring", 00:10:30.305 "raid_level": "concat", 00:10:30.305 "superblock": true, 00:10:30.305 "num_base_bdevs": 2, 00:10:30.305 "num_base_bdevs_discovered": 0, 00:10:30.305 "num_base_bdevs_operational": 2, 00:10:30.305 "base_bdevs_list": [ 00:10:30.305 { 00:10:30.305 "name": "BaseBdev1", 00:10:30.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.305 "is_configured": false, 00:10:30.305 "data_offset": 0, 00:10:30.305 "data_size": 0 00:10:30.305 }, 00:10:30.305 { 00:10:30.305 "name": "BaseBdev2", 00:10:30.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.305 "is_configured": false, 00:10:30.305 "data_offset": 0, 00:10:30.305 "data_size": 0 00:10:30.305 } 00:10:30.305 ] 00:10:30.305 }' 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.305 18:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.873 18:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:31.132 [2024-07-24 18:13:39.482509] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.132 [2024-07-24 18:13:39.482526] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fb01a0 name Existed_Raid, state configuring 00:10:31.132 18:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:31.132 [2024-07-24 18:13:39.658977] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.132 [2024-07-24 18:13:39.658994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.132 [2024-07-24 18:13:39.658999] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.132 [2024-07-24 18:13:39.659007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.132 18:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.391 [2024-07-24 18:13:39.843811] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.391 BaseBdev1 00:10:31.391 18:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:31.391 18:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:31.391 18:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.391 18:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:10:31.391 18:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.391 18:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.391 18:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:31.650 18:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.650 [ 00:10:31.650 { 00:10:31.650 "name": "BaseBdev1", 00:10:31.650 "aliases": [ 00:10:31.650 "5874ebe9-c215-44ca-8f37-8032df3de485" 00:10:31.650 ], 00:10:31.650 "product_name": "Malloc disk", 00:10:31.650 "block_size": 512, 00:10:31.650 "num_blocks": 65536, 00:10:31.650 "uuid": "5874ebe9-c215-44ca-8f37-8032df3de485", 00:10:31.650 "assigned_rate_limits": { 00:10:31.650 "rw_ios_per_sec": 0, 00:10:31.650 "rw_mbytes_per_sec": 0, 00:10:31.650 "r_mbytes_per_sec": 0, 00:10:31.650 "w_mbytes_per_sec": 0 00:10:31.650 }, 00:10:31.650 "claimed": true, 00:10:31.650 "claim_type": "exclusive_write", 00:10:31.650 "zoned": false, 00:10:31.650 "supported_io_types": { 00:10:31.650 "read": true, 00:10:31.650 "write": true, 00:10:31.650 "unmap": true, 00:10:31.650 "flush": true, 00:10:31.650 "reset": true, 00:10:31.650 "nvme_admin": false, 00:10:31.650 "nvme_io": false, 00:10:31.650 "nvme_io_md": false, 00:10:31.650 "write_zeroes": true, 00:10:31.650 "zcopy": true, 00:10:31.650 "get_zone_info": false, 00:10:31.650 "zone_management": false, 00:10:31.650 "zone_append": false, 00:10:31.650 "compare": false, 00:10:31.650 "compare_and_write": false, 00:10:31.650 "abort": true, 00:10:31.650 "seek_hole": false, 00:10:31.650 "seek_data": false, 00:10:31.651 "copy": true, 00:10:31.651 "nvme_iov_md": false 00:10:31.651 }, 00:10:31.651 
"memory_domains": [ 00:10:31.651 { 00:10:31.651 "dma_device_id": "system", 00:10:31.651 "dma_device_type": 1 00:10:31.651 }, 00:10:31.651 { 00:10:31.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.651 "dma_device_type": 2 00:10:31.651 } 00:10:31.651 ], 00:10:31.651 "driver_specific": {} 00:10:31.651 } 00:10:31.651 ] 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.651 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.910 18:13:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:31.910 "name": "Existed_Raid", 00:10:31.910 "uuid": "29079fb1-01e0-4ebc-8d7f-d8487402a8bd", 00:10:31.910 "strip_size_kb": 64, 00:10:31.910 "state": "configuring", 00:10:31.910 "raid_level": "concat", 00:10:31.910 "superblock": true, 00:10:31.910 "num_base_bdevs": 2, 00:10:31.910 "num_base_bdevs_discovered": 1, 00:10:31.910 "num_base_bdevs_operational": 2, 00:10:31.910 "base_bdevs_list": [ 00:10:31.910 { 00:10:31.910 "name": "BaseBdev1", 00:10:31.910 "uuid": "5874ebe9-c215-44ca-8f37-8032df3de485", 00:10:31.910 "is_configured": true, 00:10:31.910 "data_offset": 2048, 00:10:31.910 "data_size": 63488 00:10:31.910 }, 00:10:31.910 { 00:10:31.910 "name": "BaseBdev2", 00:10:31.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.910 "is_configured": false, 00:10:31.910 "data_offset": 0, 00:10:31.910 "data_size": 0 00:10:31.910 } 00:10:31.910 ] 00:10:31.910 }' 00:10:31.910 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:31.910 18:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.479 18:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:32.479 [2024-07-24 18:13:41.006803] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.479 [2024-07-24 18:13:41.006832] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fafa90 name Existed_Raid, state configuring 00:10:32.479 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:32.738 [2024-07-24 18:13:41.171253] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:32.738 [2024-07-24 18:13:41.172302] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.738 [2024-07-24 18:13:41.172327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.738 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.997 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:32.997 "name": "Existed_Raid", 00:10:32.997 "uuid": "27dbc711-5f96-48e2-82e9-584afc2cedbe", 00:10:32.997 "strip_size_kb": 64, 00:10:32.997 "state": "configuring", 00:10:32.997 "raid_level": "concat", 00:10:32.997 "superblock": true, 00:10:32.997 "num_base_bdevs": 2, 00:10:32.997 "num_base_bdevs_discovered": 1, 00:10:32.997 "num_base_bdevs_operational": 2, 00:10:32.997 "base_bdevs_list": [ 00:10:32.997 { 00:10:32.997 "name": "BaseBdev1", 00:10:32.997 "uuid": "5874ebe9-c215-44ca-8f37-8032df3de485", 00:10:32.997 "is_configured": true, 00:10:32.997 "data_offset": 2048, 00:10:32.997 "data_size": 63488 00:10:32.997 }, 00:10:32.997 { 00:10:32.997 "name": "BaseBdev2", 00:10:32.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.997 "is_configured": false, 00:10:32.997 "data_offset": 0, 00:10:32.997 "data_size": 0 00:10:32.997 } 00:10:32.997 ] 00:10:32.997 }' 00:10:32.997 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:32.997 18:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.564 18:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.564 [2024-07-24 18:13:42.020206] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.564 [2024-07-24 18:13:42.020309] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fb0880 00:10:33.564 [2024-07-24 18:13:42.020321] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:33.564 [2024-07-24 18:13:42.020441] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21639a0 00:10:33.564 [2024-07-24 18:13:42.020524] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fb0880 00:10:33.565 [2024-07-24 18:13:42.020530] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1fb0880 00:10:33.565 [2024-07-24 18:13:42.020598] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.565 BaseBdev2 00:10:33.565 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:33.565 18:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:33.565 18:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.565 18:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:33.565 18:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.565 18:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.565 18:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:33.824 [ 00:10:33.824 { 00:10:33.824 "name": "BaseBdev2", 00:10:33.824 "aliases": [ 00:10:33.824 "d863041a-11e4-4e09-9b7a-4c2382a94957" 00:10:33.824 ], 00:10:33.824 "product_name": "Malloc disk", 00:10:33.824 "block_size": 512, 00:10:33.824 "num_blocks": 65536, 00:10:33.824 "uuid": "d863041a-11e4-4e09-9b7a-4c2382a94957", 00:10:33.824 "assigned_rate_limits": { 00:10:33.824 "rw_ios_per_sec": 0, 00:10:33.824 "rw_mbytes_per_sec": 0, 00:10:33.824 "r_mbytes_per_sec": 0, 00:10:33.824 
"w_mbytes_per_sec": 0 00:10:33.824 }, 00:10:33.824 "claimed": true, 00:10:33.824 "claim_type": "exclusive_write", 00:10:33.824 "zoned": false, 00:10:33.824 "supported_io_types": { 00:10:33.824 "read": true, 00:10:33.824 "write": true, 00:10:33.824 "unmap": true, 00:10:33.824 "flush": true, 00:10:33.824 "reset": true, 00:10:33.824 "nvme_admin": false, 00:10:33.824 "nvme_io": false, 00:10:33.824 "nvme_io_md": false, 00:10:33.824 "write_zeroes": true, 00:10:33.824 "zcopy": true, 00:10:33.824 "get_zone_info": false, 00:10:33.824 "zone_management": false, 00:10:33.824 "zone_append": false, 00:10:33.824 "compare": false, 00:10:33.824 "compare_and_write": false, 00:10:33.824 "abort": true, 00:10:33.824 "seek_hole": false, 00:10:33.824 "seek_data": false, 00:10:33.824 "copy": true, 00:10:33.824 "nvme_iov_md": false 00:10:33.824 }, 00:10:33.824 "memory_domains": [ 00:10:33.824 { 00:10:33.824 "dma_device_id": "system", 00:10:33.824 "dma_device_type": 1 00:10:33.824 }, 00:10:33.824 { 00:10:33.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.824 "dma_device_type": 2 00:10:33.824 } 00:10:33.824 ], 00:10:33.824 "driver_specific": {} 00:10:33.824 } 00:10:33.824 ] 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.824 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.083 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:34.083 "name": "Existed_Raid", 00:10:34.083 "uuid": "27dbc711-5f96-48e2-82e9-584afc2cedbe", 00:10:34.083 "strip_size_kb": 64, 00:10:34.083 "state": "online", 00:10:34.083 "raid_level": "concat", 00:10:34.083 "superblock": true, 00:10:34.083 "num_base_bdevs": 2, 00:10:34.083 "num_base_bdevs_discovered": 2, 00:10:34.083 "num_base_bdevs_operational": 2, 00:10:34.083 "base_bdevs_list": [ 00:10:34.083 { 00:10:34.083 "name": "BaseBdev1", 00:10:34.083 "uuid": "5874ebe9-c215-44ca-8f37-8032df3de485", 00:10:34.083 "is_configured": true, 00:10:34.083 "data_offset": 2048, 00:10:34.083 "data_size": 63488 00:10:34.083 }, 00:10:34.083 { 00:10:34.083 "name": "BaseBdev2", 00:10:34.083 "uuid": "d863041a-11e4-4e09-9b7a-4c2382a94957", 00:10:34.083 "is_configured": true, 00:10:34.083 "data_offset": 2048, 00:10:34.083 "data_size": 63488 00:10:34.083 } 00:10:34.083 ] 
00:10:34.083 }' 00:10:34.083 18:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:34.083 18:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:34.651 [2024-07-24 18:13:43.167323] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:34.651 "name": "Existed_Raid", 00:10:34.651 "aliases": [ 00:10:34.651 "27dbc711-5f96-48e2-82e9-584afc2cedbe" 00:10:34.651 ], 00:10:34.651 "product_name": "Raid Volume", 00:10:34.651 "block_size": 512, 00:10:34.651 "num_blocks": 126976, 00:10:34.651 "uuid": "27dbc711-5f96-48e2-82e9-584afc2cedbe", 00:10:34.651 "assigned_rate_limits": { 00:10:34.651 "rw_ios_per_sec": 0, 00:10:34.651 "rw_mbytes_per_sec": 0, 00:10:34.651 "r_mbytes_per_sec": 0, 00:10:34.651 "w_mbytes_per_sec": 0 00:10:34.651 }, 00:10:34.651 "claimed": false, 00:10:34.651 
"zoned": false, 00:10:34.651 "supported_io_types": { 00:10:34.651 "read": true, 00:10:34.651 "write": true, 00:10:34.651 "unmap": true, 00:10:34.651 "flush": true, 00:10:34.651 "reset": true, 00:10:34.651 "nvme_admin": false, 00:10:34.651 "nvme_io": false, 00:10:34.651 "nvme_io_md": false, 00:10:34.651 "write_zeroes": true, 00:10:34.651 "zcopy": false, 00:10:34.651 "get_zone_info": false, 00:10:34.651 "zone_management": false, 00:10:34.651 "zone_append": false, 00:10:34.651 "compare": false, 00:10:34.651 "compare_and_write": false, 00:10:34.651 "abort": false, 00:10:34.651 "seek_hole": false, 00:10:34.651 "seek_data": false, 00:10:34.651 "copy": false, 00:10:34.651 "nvme_iov_md": false 00:10:34.651 }, 00:10:34.651 "memory_domains": [ 00:10:34.651 { 00:10:34.651 "dma_device_id": "system", 00:10:34.651 "dma_device_type": 1 00:10:34.651 }, 00:10:34.651 { 00:10:34.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.651 "dma_device_type": 2 00:10:34.651 }, 00:10:34.651 { 00:10:34.651 "dma_device_id": "system", 00:10:34.651 "dma_device_type": 1 00:10:34.651 }, 00:10:34.651 { 00:10:34.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.651 "dma_device_type": 2 00:10:34.651 } 00:10:34.651 ], 00:10:34.651 "driver_specific": { 00:10:34.651 "raid": { 00:10:34.651 "uuid": "27dbc711-5f96-48e2-82e9-584afc2cedbe", 00:10:34.651 "strip_size_kb": 64, 00:10:34.651 "state": "online", 00:10:34.651 "raid_level": "concat", 00:10:34.651 "superblock": true, 00:10:34.651 "num_base_bdevs": 2, 00:10:34.651 "num_base_bdevs_discovered": 2, 00:10:34.651 "num_base_bdevs_operational": 2, 00:10:34.651 "base_bdevs_list": [ 00:10:34.651 { 00:10:34.651 "name": "BaseBdev1", 00:10:34.651 "uuid": "5874ebe9-c215-44ca-8f37-8032df3de485", 00:10:34.651 "is_configured": true, 00:10:34.651 "data_offset": 2048, 00:10:34.651 "data_size": 63488 00:10:34.651 }, 00:10:34.651 { 00:10:34.651 "name": "BaseBdev2", 00:10:34.651 "uuid": "d863041a-11e4-4e09-9b7a-4c2382a94957", 00:10:34.651 "is_configured": true, 
00:10:34.651 "data_offset": 2048, 00:10:34.651 "data_size": 63488 00:10:34.651 } 00:10:34.651 ] 00:10:34.651 } 00:10:34.651 } 00:10:34.651 }' 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:34.651 BaseBdev2' 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:34.651 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:34.915 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:34.915 "name": "BaseBdev1", 00:10:34.915 "aliases": [ 00:10:34.915 "5874ebe9-c215-44ca-8f37-8032df3de485" 00:10:34.915 ], 00:10:34.915 "product_name": "Malloc disk", 00:10:34.915 "block_size": 512, 00:10:34.915 "num_blocks": 65536, 00:10:34.915 "uuid": "5874ebe9-c215-44ca-8f37-8032df3de485", 00:10:34.915 "assigned_rate_limits": { 00:10:34.915 "rw_ios_per_sec": 0, 00:10:34.915 "rw_mbytes_per_sec": 0, 00:10:34.915 "r_mbytes_per_sec": 0, 00:10:34.915 "w_mbytes_per_sec": 0 00:10:34.915 }, 00:10:34.915 "claimed": true, 00:10:34.915 "claim_type": "exclusive_write", 00:10:34.915 "zoned": false, 00:10:34.915 "supported_io_types": { 00:10:34.915 "read": true, 00:10:34.915 "write": true, 00:10:34.915 "unmap": true, 00:10:34.915 "flush": true, 00:10:34.915 "reset": true, 00:10:34.915 "nvme_admin": false, 00:10:34.915 "nvme_io": false, 00:10:34.915 "nvme_io_md": false, 00:10:34.915 "write_zeroes": true, 00:10:34.915 "zcopy": true, 00:10:34.915 "get_zone_info": false, 00:10:34.915 
"zone_management": false, 00:10:34.915 "zone_append": false, 00:10:34.915 "compare": false, 00:10:34.915 "compare_and_write": false, 00:10:34.915 "abort": true, 00:10:34.915 "seek_hole": false, 00:10:34.915 "seek_data": false, 00:10:34.915 "copy": true, 00:10:34.915 "nvme_iov_md": false 00:10:34.915 }, 00:10:34.915 "memory_domains": [ 00:10:34.915 { 00:10:34.915 "dma_device_id": "system", 00:10:34.915 "dma_device_type": 1 00:10:34.915 }, 00:10:34.915 { 00:10:34.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.915 "dma_device_type": 2 00:10:34.915 } 00:10:34.915 ], 00:10:34.915 "driver_specific": {} 00:10:34.915 }' 00:10:34.915 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:34.915 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:34.915 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:34.915 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:35.181 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:35.440 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:35.440 "name": "BaseBdev2", 00:10:35.440 "aliases": [ 00:10:35.440 "d863041a-11e4-4e09-9b7a-4c2382a94957" 00:10:35.440 ], 00:10:35.440 "product_name": "Malloc disk", 00:10:35.440 "block_size": 512, 00:10:35.440 "num_blocks": 65536, 00:10:35.440 "uuid": "d863041a-11e4-4e09-9b7a-4c2382a94957", 00:10:35.440 "assigned_rate_limits": { 00:10:35.440 "rw_ios_per_sec": 0, 00:10:35.440 "rw_mbytes_per_sec": 0, 00:10:35.440 "r_mbytes_per_sec": 0, 00:10:35.440 "w_mbytes_per_sec": 0 00:10:35.440 }, 00:10:35.440 "claimed": true, 00:10:35.440 "claim_type": "exclusive_write", 00:10:35.440 "zoned": false, 00:10:35.440 "supported_io_types": { 00:10:35.440 "read": true, 00:10:35.440 "write": true, 00:10:35.440 "unmap": true, 00:10:35.440 "flush": true, 00:10:35.440 "reset": true, 00:10:35.440 "nvme_admin": false, 00:10:35.440 "nvme_io": false, 00:10:35.440 "nvme_io_md": false, 00:10:35.440 "write_zeroes": true, 00:10:35.440 "zcopy": true, 00:10:35.440 "get_zone_info": false, 00:10:35.440 "zone_management": false, 00:10:35.440 "zone_append": false, 00:10:35.440 "compare": false, 00:10:35.440 "compare_and_write": false, 00:10:35.440 "abort": true, 00:10:35.440 "seek_hole": false, 00:10:35.440 "seek_data": false, 00:10:35.440 "copy": true, 00:10:35.440 "nvme_iov_md": false 00:10:35.440 }, 00:10:35.440 "memory_domains": [ 00:10:35.440 { 00:10:35.440 "dma_device_id": "system", 00:10:35.440 "dma_device_type": 1 00:10:35.440 }, 00:10:35.440 { 00:10:35.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.440 "dma_device_type": 2 00:10:35.440 } 00:10:35.440 ], 
00:10:35.440 "driver_specific": {} 00:10:35.440 }' 00:10:35.440 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:35.440 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:35.440 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:35.440 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:35.440 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:35.440 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:35.440 18:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:35.440 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:35.699 [2024-07-24 18:13:44.270044] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.699 [2024-07-24 18:13:44.270061] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.699 [2024-07-24 18:13:44.270090] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 
00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:35.699 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.958 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.958 18:13:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:35.958 "name": "Existed_Raid", 00:10:35.958 "uuid": "27dbc711-5f96-48e2-82e9-584afc2cedbe", 00:10:35.958 "strip_size_kb": 64, 00:10:35.958 "state": "offline", 00:10:35.958 "raid_level": "concat", 00:10:35.958 "superblock": true, 00:10:35.958 "num_base_bdevs": 2, 00:10:35.958 "num_base_bdevs_discovered": 1, 00:10:35.958 "num_base_bdevs_operational": 1, 00:10:35.958 "base_bdevs_list": [ 00:10:35.958 { 00:10:35.958 "name": null, 00:10:35.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.958 "is_configured": false, 00:10:35.958 "data_offset": 2048, 00:10:35.958 "data_size": 63488 00:10:35.958 }, 00:10:35.958 { 00:10:35.958 "name": "BaseBdev2", 00:10:35.958 "uuid": "d863041a-11e4-4e09-9b7a-4c2382a94957", 00:10:35.958 "is_configured": true, 00:10:35.958 "data_offset": 2048, 00:10:35.958 "data_size": 63488 00:10:35.958 } 00:10:35.958 ] 00:10:35.958 }' 00:10:35.959 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:35.959 18:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.527 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:36.527 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:36.527 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.527 18:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:36.527 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:36.527 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.527 18:13:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:36.786 [2024-07-24 18:13:45.269491] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.786 [2024-07-24 18:13:45.269524] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fb0880 name Existed_Raid, state offline 00:10:36.786 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:36.786 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:36.786 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.786 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2158449 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2158449 ']' 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2158449 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2158449 
00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2158449' 00:10:37.045 killing process with pid 2158449 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2158449 00:10:37.045 [2024-07-24 18:13:45.538032] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.045 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2158449 00:10:37.045 [2024-07-24 18:13:45.538816] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.304 18:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:37.304 00:10:37.304 real 0m8.047s 00:10:37.304 user 0m14.148s 00:10:37.304 sys 0m1.585s 00:10:37.304 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.304 18:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.304 ************************************ 00:10:37.305 END TEST raid_state_function_test_sb 00:10:37.305 ************************************ 00:10:37.305 18:13:45 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:37.305 18:13:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:37.305 18:13:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.305 18:13:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.305 ************************************ 00:10:37.305 START TEST raid_superblock_test 00:10:37.305 ************************************ 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2160042 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2160042 /var/tmp/spdk-raid.sock 00:10:37.305 
18:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2160042 ']' 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:37.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.305 18:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.305 [2024-07-24 18:13:45.849893] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:10:37.305 [2024-07-24 18:13:45.849936] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160042 ] 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:01.0 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:01.1 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:01.2 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:01.3 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:01.4 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:01.5 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:01.6 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:01.7 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:02.0 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:02.1 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:02.2 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:02.3 cannot be used 
00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:02.4 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:02.5 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:02.6 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b3:02.7 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b5:01.0 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b5:01.1 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b5:01.2 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b5:01.3 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b5:01.4 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b5:01.5 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b5:01.6 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b5:01.7 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b5:02.0 cannot be used 00:10:37.305 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.305 EAL: Requested device 0000:b5:02.1 cannot be used 00:10:37.564 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.564 EAL: Requested device 0000:b5:02.2 cannot be used 00:10:37.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.564 EAL: Requested device 0000:b5:02.3 cannot be used 00:10:37.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.564 EAL: Requested device 0000:b5:02.4 cannot be used 00:10:37.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.564 EAL: Requested device 0000:b5:02.5 cannot be used 00:10:37.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.564 EAL: Requested device 0000:b5:02.6 cannot be used 00:10:37.564 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.564 EAL: Requested device 0000:b5:02.7 cannot be used 00:10:37.564 [2024-07-24 18:13:45.942464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.564 [2024-07-24 18:13:46.018325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.564 [2024-07-24 18:13:46.073109] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.564 [2024-07-24 18:13:46.073135] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:38.132 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:38.391 malloc1 00:10:38.391 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:38.391 [2024-07-24 18:13:46.965624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:38.391 [2024-07-24 18:13:46.965666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.391 [2024-07-24 18:13:46.965678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18e8cb0 00:10:38.391 [2024-07-24 18:13:46.965686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.391 [2024-07-24 18:13:46.966695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.391 [2024-07-24 18:13:46.966717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:38.391 pt1 00:10:38.391 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:38.391 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:38.391 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:38.391 18:13:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:38.392 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:38.392 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:38.392 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:38.651 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:38.651 18:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:38.651 malloc2 00:10:38.651 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:38.910 [2024-07-24 18:13:47.305993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:38.910 [2024-07-24 18:13:47.306022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.910 [2024-07-24 18:13:47.306032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18ea0b0 00:10:38.910 [2024-07-24 18:13:47.306039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.910 [2024-07-24 18:13:47.307035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.910 [2024-07-24 18:13:47.307060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:38.910 pt2 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:38.910 18:13:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:10:38.910 [2024-07-24 18:13:47.478462] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:38.910 [2024-07-24 18:13:47.479265] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:38.910 [2024-07-24 18:13:47.479362] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a8c9b0 00:10:38.910 [2024-07-24 18:13:47.479371] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:38.910 [2024-07-24 18:13:47.479490] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a82350 00:10:38.910 [2024-07-24 18:13:47.479584] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a8c9b0 00:10:38.910 [2024-07-24 18:13:47.479591] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a8c9b0 00:10:38.910 [2024-07-24 18:13:47.479658] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.910 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.170 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.170 "name": "raid_bdev1", 00:10:39.170 "uuid": "e13bab36-b94d-4216-9edb-83c361330f4c", 00:10:39.170 "strip_size_kb": 64, 00:10:39.170 "state": "online", 00:10:39.170 "raid_level": "concat", 00:10:39.170 "superblock": true, 00:10:39.170 "num_base_bdevs": 2, 00:10:39.170 "num_base_bdevs_discovered": 2, 00:10:39.170 "num_base_bdevs_operational": 2, 00:10:39.170 "base_bdevs_list": [ 00:10:39.170 { 00:10:39.170 "name": "pt1", 00:10:39.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.170 "is_configured": true, 00:10:39.170 "data_offset": 2048, 00:10:39.170 "data_size": 63488 00:10:39.170 }, 00:10:39.170 { 00:10:39.170 "name": "pt2", 00:10:39.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.170 "is_configured": true, 00:10:39.170 "data_offset": 2048, 00:10:39.170 "data_size": 63488 00:10:39.170 } 00:10:39.170 ] 00:10:39.170 }' 00:10:39.170 18:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.170 18:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.737 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:39.737 18:13:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:39.737 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:39.737 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:39.737 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:39.737 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:39.737 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:39.737 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:39.737 [2024-07-24 18:13:48.328810] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.996 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:39.996 "name": "raid_bdev1", 00:10:39.996 "aliases": [ 00:10:39.996 "e13bab36-b94d-4216-9edb-83c361330f4c" 00:10:39.996 ], 00:10:39.996 "product_name": "Raid Volume", 00:10:39.996 "block_size": 512, 00:10:39.996 "num_blocks": 126976, 00:10:39.996 "uuid": "e13bab36-b94d-4216-9edb-83c361330f4c", 00:10:39.996 "assigned_rate_limits": { 00:10:39.996 "rw_ios_per_sec": 0, 00:10:39.996 "rw_mbytes_per_sec": 0, 00:10:39.996 "r_mbytes_per_sec": 0, 00:10:39.996 "w_mbytes_per_sec": 0 00:10:39.996 }, 00:10:39.996 "claimed": false, 00:10:39.996 "zoned": false, 00:10:39.996 "supported_io_types": { 00:10:39.996 "read": true, 00:10:39.996 "write": true, 00:10:39.996 "unmap": true, 00:10:39.996 "flush": true, 00:10:39.996 "reset": true, 00:10:39.996 "nvme_admin": false, 00:10:39.996 "nvme_io": false, 00:10:39.996 "nvme_io_md": false, 00:10:39.996 "write_zeroes": true, 00:10:39.996 "zcopy": false, 00:10:39.996 "get_zone_info": false, 00:10:39.996 "zone_management": false, 00:10:39.996 "zone_append": false, 
00:10:39.996 "compare": false, 00:10:39.996 "compare_and_write": false, 00:10:39.996 "abort": false, 00:10:39.996 "seek_hole": false, 00:10:39.996 "seek_data": false, 00:10:39.996 "copy": false, 00:10:39.996 "nvme_iov_md": false 00:10:39.996 }, 00:10:39.996 "memory_domains": [ 00:10:39.996 { 00:10:39.996 "dma_device_id": "system", 00:10:39.996 "dma_device_type": 1 00:10:39.996 }, 00:10:39.996 { 00:10:39.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.996 "dma_device_type": 2 00:10:39.996 }, 00:10:39.996 { 00:10:39.996 "dma_device_id": "system", 00:10:39.996 "dma_device_type": 1 00:10:39.996 }, 00:10:39.996 { 00:10:39.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.996 "dma_device_type": 2 00:10:39.996 } 00:10:39.996 ], 00:10:39.996 "driver_specific": { 00:10:39.996 "raid": { 00:10:39.996 "uuid": "e13bab36-b94d-4216-9edb-83c361330f4c", 00:10:39.996 "strip_size_kb": 64, 00:10:39.996 "state": "online", 00:10:39.996 "raid_level": "concat", 00:10:39.996 "superblock": true, 00:10:39.996 "num_base_bdevs": 2, 00:10:39.996 "num_base_bdevs_discovered": 2, 00:10:39.996 "num_base_bdevs_operational": 2, 00:10:39.996 "base_bdevs_list": [ 00:10:39.996 { 00:10:39.996 "name": "pt1", 00:10:39.996 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.996 "is_configured": true, 00:10:39.996 "data_offset": 2048, 00:10:39.996 "data_size": 63488 00:10:39.996 }, 00:10:39.996 { 00:10:39.996 "name": "pt2", 00:10:39.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.996 "is_configured": true, 00:10:39.996 "data_offset": 2048, 00:10:39.996 "data_size": 63488 00:10:39.997 } 00:10:39.997 ] 00:10:39.997 } 00:10:39.997 } 00:10:39.997 }' 00:10:39.997 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.997 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:39.997 pt2' 00:10:39.997 18:13:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:39.997 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:39.997 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:39.997 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:39.997 "name": "pt1", 00:10:39.997 "aliases": [ 00:10:39.997 "00000000-0000-0000-0000-000000000001" 00:10:39.997 ], 00:10:39.997 "product_name": "passthru", 00:10:39.997 "block_size": 512, 00:10:39.997 "num_blocks": 65536, 00:10:39.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.997 "assigned_rate_limits": { 00:10:39.997 "rw_ios_per_sec": 0, 00:10:39.997 "rw_mbytes_per_sec": 0, 00:10:39.997 "r_mbytes_per_sec": 0, 00:10:39.997 "w_mbytes_per_sec": 0 00:10:39.997 }, 00:10:39.997 "claimed": true, 00:10:39.997 "claim_type": "exclusive_write", 00:10:39.997 "zoned": false, 00:10:39.997 "supported_io_types": { 00:10:39.997 "read": true, 00:10:39.997 "write": true, 00:10:39.997 "unmap": true, 00:10:39.997 "flush": true, 00:10:39.997 "reset": true, 00:10:39.997 "nvme_admin": false, 00:10:39.997 "nvme_io": false, 00:10:39.997 "nvme_io_md": false, 00:10:39.997 "write_zeroes": true, 00:10:39.997 "zcopy": true, 00:10:39.997 "get_zone_info": false, 00:10:39.997 "zone_management": false, 00:10:39.997 "zone_append": false, 00:10:39.997 "compare": false, 00:10:39.997 "compare_and_write": false, 00:10:39.997 "abort": true, 00:10:39.997 "seek_hole": false, 00:10:39.997 "seek_data": false, 00:10:39.997 "copy": true, 00:10:39.997 "nvme_iov_md": false 00:10:39.997 }, 00:10:39.997 "memory_domains": [ 00:10:39.997 { 00:10:39.997 "dma_device_id": "system", 00:10:39.997 "dma_device_type": 1 00:10:39.997 }, 00:10:39.997 { 00:10:39.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.997 
"dma_device_type": 2 00:10:39.997 } 00:10:39.997 ], 00:10:39.997 "driver_specific": { 00:10:39.997 "passthru": { 00:10:39.997 "name": "pt1", 00:10:39.997 "base_bdev_name": "malloc1" 00:10:39.997 } 00:10:39.997 } 00:10:39.997 }' 00:10:39.997 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:40.256 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:40.256 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:40.256 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:40.256 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:40.256 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:40.256 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:40.256 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:40.256 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:40.256 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:40.256 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:40.515 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:40.515 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:40.515 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:40.515 18:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:40.515 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:40.515 "name": "pt2", 00:10:40.515 "aliases": [ 00:10:40.515 
"00000000-0000-0000-0000-000000000002" 00:10:40.515 ], 00:10:40.515 "product_name": "passthru", 00:10:40.515 "block_size": 512, 00:10:40.515 "num_blocks": 65536, 00:10:40.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.515 "assigned_rate_limits": { 00:10:40.515 "rw_ios_per_sec": 0, 00:10:40.515 "rw_mbytes_per_sec": 0, 00:10:40.515 "r_mbytes_per_sec": 0, 00:10:40.515 "w_mbytes_per_sec": 0 00:10:40.515 }, 00:10:40.515 "claimed": true, 00:10:40.515 "claim_type": "exclusive_write", 00:10:40.515 "zoned": false, 00:10:40.515 "supported_io_types": { 00:10:40.515 "read": true, 00:10:40.515 "write": true, 00:10:40.515 "unmap": true, 00:10:40.515 "flush": true, 00:10:40.515 "reset": true, 00:10:40.515 "nvme_admin": false, 00:10:40.515 "nvme_io": false, 00:10:40.515 "nvme_io_md": false, 00:10:40.515 "write_zeroes": true, 00:10:40.515 "zcopy": true, 00:10:40.515 "get_zone_info": false, 00:10:40.515 "zone_management": false, 00:10:40.515 "zone_append": false, 00:10:40.515 "compare": false, 00:10:40.515 "compare_and_write": false, 00:10:40.515 "abort": true, 00:10:40.515 "seek_hole": false, 00:10:40.515 "seek_data": false, 00:10:40.515 "copy": true, 00:10:40.515 "nvme_iov_md": false 00:10:40.515 }, 00:10:40.515 "memory_domains": [ 00:10:40.515 { 00:10:40.515 "dma_device_id": "system", 00:10:40.515 "dma_device_type": 1 00:10:40.515 }, 00:10:40.515 { 00:10:40.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.515 "dma_device_type": 2 00:10:40.515 } 00:10:40.515 ], 00:10:40.515 "driver_specific": { 00:10:40.515 "passthru": { 00:10:40.515 "name": "pt2", 00:10:40.515 "base_bdev_name": "malloc2" 00:10:40.515 } 00:10:40.515 } 00:10:40.515 }' 00:10:40.515 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:40.515 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:40.515 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:40.515 18:13:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:40.774 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:40.774 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:40.774 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:40.774 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:40.774 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:40.775 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:40.775 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:40.775 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:40.775 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:40.775 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:41.034 [2024-07-24 18:13:49.499787] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.034 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=e13bab36-b94d-4216-9edb-83c361330f4c 00:10:41.034 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z e13bab36-b94d-4216-9edb-83c361330f4c ']' 00:10:41.034 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:41.294 [2024-07-24 18:13:49.664063] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.294 [2024-07-24 18:13:49.664076] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:10:41.294 [2024-07-24 18:13:49.664118] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.294 [2024-07-24 18:13:49.664150] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.294 [2024-07-24 18:13:49.664157] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a8c9b0 name raid_bdev1, state offline 00:10:41.294 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.294 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:41.294 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:41.294 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:41.294 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:41.294 18:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:41.555 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:41.555 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:41.814 18:13:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:41.814 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n 
raid_bdev1 00:10:42.073 [2024-07-24 18:13:50.514234] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:42.073 [2024-07-24 18:13:50.515186] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:42.073 [2024-07-24 18:13:50.515230] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:42.073 [2024-07-24 18:13:50.515260] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:42.073 [2024-07-24 18:13:50.515272] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.073 [2024-07-24 18:13:50.515277] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a8c730 name raid_bdev1, state configuring 00:10:42.073 request: 00:10:42.073 { 00:10:42.073 "name": "raid_bdev1", 00:10:42.073 "raid_level": "concat", 00:10:42.073 "base_bdevs": [ 00:10:42.073 "malloc1", 00:10:42.073 "malloc2" 00:10:42.073 ], 00:10:42.073 "strip_size_kb": 64, 00:10:42.073 "superblock": false, 00:10:42.073 "method": "bdev_raid_create", 00:10:42.073 "req_id": 1 00:10:42.073 } 00:10:42.073 Got JSON-RPC error response 00:10:42.073 response: 00:10:42.073 { 00:10:42.073 "code": -17, 00:10:42.073 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:42.073 } 00:10:42.073 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:42.073 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:42.073 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:42.073 18:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:42.073 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:42.073 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:42.333 [2024-07-24 18:13:50.863104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:42.333 [2024-07-24 18:13:50.863140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.333 [2024-07-24 18:13:50.863154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18e8ee0 00:10:42.333 [2024-07-24 18:13:50.863162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.333 [2024-07-24 18:13:50.864335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.333 [2024-07-24 18:13:50.864359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:42.333 [2024-07-24 18:13:50.864405] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:42.333 [2024-07-24 18:13:50.864423] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:42.333 pt1 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.333 18:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.624 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:42.624 "name": "raid_bdev1", 00:10:42.624 "uuid": "e13bab36-b94d-4216-9edb-83c361330f4c", 00:10:42.624 "strip_size_kb": 64, 00:10:42.624 "state": "configuring", 00:10:42.624 "raid_level": "concat", 00:10:42.624 "superblock": true, 00:10:42.624 "num_base_bdevs": 2, 00:10:42.624 "num_base_bdevs_discovered": 1, 00:10:42.624 "num_base_bdevs_operational": 2, 00:10:42.624 "base_bdevs_list": [ 00:10:42.624 { 00:10:42.624 "name": "pt1", 00:10:42.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:42.624 "is_configured": true, 00:10:42.624 "data_offset": 2048, 00:10:42.624 "data_size": 63488 00:10:42.624 }, 00:10:42.624 { 00:10:42.624 "name": null, 00:10:42.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:42.624 "is_configured": false, 00:10:42.624 "data_offset": 2048, 00:10:42.624 "data_size": 63488 00:10:42.624 } 00:10:42.624 ] 00:10:42.624 }' 00:10:42.624 18:13:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:42.624 18:13:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.228 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:43.228 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:43.229 [2024-07-24 18:13:51.689229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:43.229 [2024-07-24 18:13:51.689259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.229 [2024-07-24 18:13:51.689271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a82f60 00:10:43.229 [2024-07-24 18:13:51.689278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.229 [2024-07-24 18:13:51.689529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.229 [2024-07-24 18:13:51.689543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:43.229 [2024-07-24 18:13:51.689599] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:43.229 [2024-07-24 18:13:51.689613] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:43.229 [2024-07-24 18:13:51.689686] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x18df600 00:10:43.229 [2024-07-24 18:13:51.689694] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:43.229 [2024-07-24 18:13:51.689813] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18e0640 00:10:43.229 [2024-07-24 18:13:51.689895] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18df600 00:10:43.229 [2024-07-24 18:13:51.689902] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18df600 00:10:43.229 [2024-07-24 18:13:51.689968] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.229 pt2 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.229 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.488 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:43.488 "name": "raid_bdev1", 00:10:43.488 "uuid": "e13bab36-b94d-4216-9edb-83c361330f4c", 00:10:43.488 "strip_size_kb": 64, 00:10:43.488 "state": "online", 00:10:43.488 "raid_level": "concat", 00:10:43.488 "superblock": true, 00:10:43.488 "num_base_bdevs": 2, 00:10:43.488 "num_base_bdevs_discovered": 2, 00:10:43.488 "num_base_bdevs_operational": 2, 00:10:43.488 "base_bdevs_list": [ 00:10:43.488 { 00:10:43.488 "name": "pt1", 00:10:43.488 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.488 "is_configured": true, 00:10:43.488 "data_offset": 2048, 00:10:43.488 "data_size": 63488 00:10:43.488 }, 00:10:43.488 { 00:10:43.488 "name": "pt2", 00:10:43.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.488 "is_configured": true, 00:10:43.488 "data_offset": 2048, 00:10:43.488 "data_size": 63488 00:10:43.488 } 00:10:43.488 ] 00:10:43.488 }' 00:10:43.488 18:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:43.488 18:13:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- 
# jq '.[]' 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:44.056 [2024-07-24 18:13:52.535575] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:44.056 "name": "raid_bdev1", 00:10:44.056 "aliases": [ 00:10:44.056 "e13bab36-b94d-4216-9edb-83c361330f4c" 00:10:44.056 ], 00:10:44.056 "product_name": "Raid Volume", 00:10:44.056 "block_size": 512, 00:10:44.056 "num_blocks": 126976, 00:10:44.056 "uuid": "e13bab36-b94d-4216-9edb-83c361330f4c", 00:10:44.056 "assigned_rate_limits": { 00:10:44.056 "rw_ios_per_sec": 0, 00:10:44.056 "rw_mbytes_per_sec": 0, 00:10:44.056 "r_mbytes_per_sec": 0, 00:10:44.056 "w_mbytes_per_sec": 0 00:10:44.056 }, 00:10:44.056 "claimed": false, 00:10:44.056 "zoned": false, 00:10:44.056 "supported_io_types": { 00:10:44.056 "read": true, 00:10:44.056 "write": true, 00:10:44.056 "unmap": true, 00:10:44.056 "flush": true, 00:10:44.056 "reset": true, 00:10:44.056 "nvme_admin": false, 00:10:44.056 "nvme_io": false, 00:10:44.056 "nvme_io_md": false, 00:10:44.056 "write_zeroes": true, 00:10:44.056 "zcopy": false, 00:10:44.056 "get_zone_info": false, 00:10:44.056 "zone_management": false, 00:10:44.056 "zone_append": false, 00:10:44.056 "compare": false, 00:10:44.056 "compare_and_write": false, 00:10:44.056 "abort": false, 00:10:44.056 "seek_hole": false, 00:10:44.056 "seek_data": false, 00:10:44.056 "copy": false, 00:10:44.056 "nvme_iov_md": false 00:10:44.056 }, 00:10:44.056 "memory_domains": [ 00:10:44.056 { 00:10:44.056 "dma_device_id": "system", 00:10:44.056 "dma_device_type": 1 00:10:44.056 }, 00:10:44.056 { 00:10:44.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.056 "dma_device_type": 2 00:10:44.056 }, 00:10:44.056 { 00:10:44.056 "dma_device_id": 
"system", 00:10:44.056 "dma_device_type": 1 00:10:44.056 }, 00:10:44.056 { 00:10:44.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.056 "dma_device_type": 2 00:10:44.056 } 00:10:44.056 ], 00:10:44.056 "driver_specific": { 00:10:44.056 "raid": { 00:10:44.056 "uuid": "e13bab36-b94d-4216-9edb-83c361330f4c", 00:10:44.056 "strip_size_kb": 64, 00:10:44.056 "state": "online", 00:10:44.056 "raid_level": "concat", 00:10:44.056 "superblock": true, 00:10:44.056 "num_base_bdevs": 2, 00:10:44.056 "num_base_bdevs_discovered": 2, 00:10:44.056 "num_base_bdevs_operational": 2, 00:10:44.056 "base_bdevs_list": [ 00:10:44.056 { 00:10:44.056 "name": "pt1", 00:10:44.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.056 "is_configured": true, 00:10:44.056 "data_offset": 2048, 00:10:44.056 "data_size": 63488 00:10:44.056 }, 00:10:44.056 { 00:10:44.056 "name": "pt2", 00:10:44.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.056 "is_configured": true, 00:10:44.056 "data_offset": 2048, 00:10:44.056 "data_size": 63488 00:10:44.056 } 00:10:44.056 ] 00:10:44.056 } 00:10:44.056 } 00:10:44.056 }' 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:44.056 pt2' 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:44.056 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:44.316 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:44.316 "name": "pt1", 00:10:44.316 "aliases": [ 00:10:44.316 
"00000000-0000-0000-0000-000000000001" 00:10:44.316 ], 00:10:44.316 "product_name": "passthru", 00:10:44.316 "block_size": 512, 00:10:44.316 "num_blocks": 65536, 00:10:44.316 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.316 "assigned_rate_limits": { 00:10:44.316 "rw_ios_per_sec": 0, 00:10:44.316 "rw_mbytes_per_sec": 0, 00:10:44.316 "r_mbytes_per_sec": 0, 00:10:44.316 "w_mbytes_per_sec": 0 00:10:44.316 }, 00:10:44.316 "claimed": true, 00:10:44.316 "claim_type": "exclusive_write", 00:10:44.316 "zoned": false, 00:10:44.316 "supported_io_types": { 00:10:44.316 "read": true, 00:10:44.316 "write": true, 00:10:44.316 "unmap": true, 00:10:44.316 "flush": true, 00:10:44.316 "reset": true, 00:10:44.316 "nvme_admin": false, 00:10:44.316 "nvme_io": false, 00:10:44.316 "nvme_io_md": false, 00:10:44.316 "write_zeroes": true, 00:10:44.316 "zcopy": true, 00:10:44.316 "get_zone_info": false, 00:10:44.316 "zone_management": false, 00:10:44.316 "zone_append": false, 00:10:44.316 "compare": false, 00:10:44.316 "compare_and_write": false, 00:10:44.316 "abort": true, 00:10:44.316 "seek_hole": false, 00:10:44.316 "seek_data": false, 00:10:44.316 "copy": true, 00:10:44.316 "nvme_iov_md": false 00:10:44.316 }, 00:10:44.316 "memory_domains": [ 00:10:44.316 { 00:10:44.316 "dma_device_id": "system", 00:10:44.316 "dma_device_type": 1 00:10:44.316 }, 00:10:44.316 { 00:10:44.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.316 "dma_device_type": 2 00:10:44.316 } 00:10:44.316 ], 00:10:44.316 "driver_specific": { 00:10:44.316 "passthru": { 00:10:44.316 "name": "pt1", 00:10:44.316 "base_bdev_name": "malloc1" 00:10:44.316 } 00:10:44.316 } 00:10:44.316 }' 00:10:44.316 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:44.316 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:44.316 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:44.316 18:13:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:44.316 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:44.575 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:44.575 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:44.575 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:44.575 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:44.575 18:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:44.575 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:44.575 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:44.575 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:44.575 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:44.575 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:44.836 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:44.836 "name": "pt2", 00:10:44.836 "aliases": [ 00:10:44.836 "00000000-0000-0000-0000-000000000002" 00:10:44.836 ], 00:10:44.836 "product_name": "passthru", 00:10:44.836 "block_size": 512, 00:10:44.836 "num_blocks": 65536, 00:10:44.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.836 "assigned_rate_limits": { 00:10:44.836 "rw_ios_per_sec": 0, 00:10:44.836 "rw_mbytes_per_sec": 0, 00:10:44.836 "r_mbytes_per_sec": 0, 00:10:44.836 "w_mbytes_per_sec": 0 00:10:44.836 }, 00:10:44.836 "claimed": true, 00:10:44.836 "claim_type": "exclusive_write", 00:10:44.836 "zoned": false, 00:10:44.836 "supported_io_types": { 
00:10:44.836 "read": true, 00:10:44.836 "write": true, 00:10:44.836 "unmap": true, 00:10:44.836 "flush": true, 00:10:44.836 "reset": true, 00:10:44.836 "nvme_admin": false, 00:10:44.836 "nvme_io": false, 00:10:44.836 "nvme_io_md": false, 00:10:44.836 "write_zeroes": true, 00:10:44.836 "zcopy": true, 00:10:44.836 "get_zone_info": false, 00:10:44.836 "zone_management": false, 00:10:44.836 "zone_append": false, 00:10:44.836 "compare": false, 00:10:44.836 "compare_and_write": false, 00:10:44.836 "abort": true, 00:10:44.836 "seek_hole": false, 00:10:44.836 "seek_data": false, 00:10:44.836 "copy": true, 00:10:44.836 "nvme_iov_md": false 00:10:44.836 }, 00:10:44.836 "memory_domains": [ 00:10:44.836 { 00:10:44.836 "dma_device_id": "system", 00:10:44.836 "dma_device_type": 1 00:10:44.837 }, 00:10:44.837 { 00:10:44.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.837 "dma_device_type": 2 00:10:44.837 } 00:10:44.837 ], 00:10:44.837 "driver_specific": { 00:10:44.837 "passthru": { 00:10:44.837 "name": "pt2", 00:10:44.837 "base_bdev_name": "malloc2" 00:10:44.837 } 00:10:44.837 } 00:10:44.837 }' 00:10:44.837 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:44.837 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:44.837 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:44.837 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:44.837 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:44.837 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:44.837 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:44.837 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:45.107 [2024-07-24 18:13:53.682518] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' e13bab36-b94d-4216-9edb-83c361330f4c '!=' e13bab36-b94d-4216-9edb-83c361330f4c ']' 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2160042 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2160042 ']' 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2160042 00:10:45.107 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:45.365 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.365 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2160042 00:10:45.365 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.365 18:13:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.365 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2160042' 00:10:45.365 killing process with pid 2160042 00:10:45.365 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2160042 00:10:45.365 [2024-07-24 18:13:53.746470] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.366 [2024-07-24 18:13:53.746520] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.366 [2024-07-24 18:13:53.746551] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.366 [2024-07-24 18:13:53.746558] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18df600 name raid_bdev1, state offline 00:10:45.366 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2160042 00:10:45.366 [2024-07-24 18:13:53.761460] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.366 18:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:45.366 00:10:45.366 real 0m8.140s 00:10:45.366 user 0m14.262s 00:10:45.366 sys 0m1.636s 00:10:45.366 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.366 18:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.366 ************************************ 00:10:45.366 END TEST raid_superblock_test 00:10:45.366 ************************************ 00:10:45.624 18:13:53 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:45.624 18:13:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:45.624 18:13:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.624 18:13:53 bdev_raid -- common/autotest_common.sh@10 
-- # set +x 00:10:45.624 ************************************ 00:10:45.625 START TEST raid_read_error_test 00:10:45.625 ************************************ 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hwJ1vJNVES 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2161829 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2161829 /var/tmp/spdk-raid.sock 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2161829 ']' 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:45.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.625 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:45.625 [2024-07-24 18:13:54.078833] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:10:45.625 [2024-07-24 18:13:54.078877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161829 ] 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:01.0 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:01.1 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:01.2 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:01.3 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:01.4 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:01.5 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:01.6 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:01.7 cannot be used 
00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:02.0 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:02.1 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:02.2 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:02.3 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:02.4 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:02.5 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:02.6 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b3:02.7 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:01.0 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:01.1 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:01.2 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:01.3 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:01.4 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:01.5 cannot be used 00:10:45.625 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:01.6 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:01.7 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:02.0 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:02.1 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:02.2 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:02.3 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:02.4 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:02.5 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:02.6 cannot be used 00:10:45.625 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:45.625 EAL: Requested device 0000:b5:02.7 cannot be used 00:10:45.625 [2024-07-24 18:13:54.172851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.884 [2024-07-24 18:13:54.245730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.884 [2024-07-24 18:13:54.304391] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.884 [2024-07-24 18:13:54.304420] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.452 18:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.452 18:13:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@864 -- # return 0 00:10:46.452 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:46.452 18:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:46.452 BaseBdev1_malloc 00:10:46.452 18:13:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:46.710 true 00:10:46.710 18:13:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:46.970 [2024-07-24 18:13:55.349864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:46.970 [2024-07-24 18:13:55.349898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.970 [2024-07-24 18:13:55.349911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1157ed0 00:10:46.970 [2024-07-24 18:13:55.349920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.970 [2024-07-24 18:13:55.351109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.970 [2024-07-24 18:13:55.351132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:46.970 BaseBdev1 00:10:46.970 18:13:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:46.970 18:13:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:46.970 BaseBdev2_malloc 00:10:46.970 
18:13:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:47.229 true 00:10:47.229 18:13:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:47.489 [2024-07-24 18:13:55.846735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:47.489 [2024-07-24 18:13:55.846767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.489 [2024-07-24 18:13:55.846784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x115cb60 00:10:47.489 [2024-07-24 18:13:55.846792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.489 [2024-07-24 18:13:55.847826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.489 [2024-07-24 18:13:55.847849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:47.489 BaseBdev2 00:10:47.489 18:13:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:47.489 [2024-07-24 18:13:55.999147] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.489 [2024-07-24 18:13:55.999939] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.489 [2024-07-24 18:13:56.000071] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x115e790 00:10:47.489 [2024-07-24 18:13:56.000080] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:47.489 [2024-07-24 18:13:56.000199] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x115dff0 00:10:47.489 [2024-07-24 18:13:56.000290] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x115e790 00:10:47.489 [2024-07-24 18:13:56.000296] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x115e790 00:10:47.489 [2024-07-24 18:13:56.000359] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.489 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.748 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:10:47.748 "name": "raid_bdev1", 00:10:47.748 "uuid": "ee9c857e-a3a7-4ab5-a9b0-0c2e7826511f", 00:10:47.748 "strip_size_kb": 64, 00:10:47.748 "state": "online", 00:10:47.748 "raid_level": "concat", 00:10:47.748 "superblock": true, 00:10:47.748 "num_base_bdevs": 2, 00:10:47.748 "num_base_bdevs_discovered": 2, 00:10:47.748 "num_base_bdevs_operational": 2, 00:10:47.748 "base_bdevs_list": [ 00:10:47.748 { 00:10:47.748 "name": "BaseBdev1", 00:10:47.748 "uuid": "8e35050d-47cf-5d78-8e4b-e970340ebde0", 00:10:47.748 "is_configured": true, 00:10:47.748 "data_offset": 2048, 00:10:47.748 "data_size": 63488 00:10:47.748 }, 00:10:47.748 { 00:10:47.748 "name": "BaseBdev2", 00:10:47.748 "uuid": "b0e726e0-c02c-5c85-b26e-ced0e37bf108", 00:10:47.748 "is_configured": true, 00:10:47.748 "data_offset": 2048, 00:10:47.748 "data_size": 63488 00:10:47.749 } 00:10:47.749 ] 00:10:47.749 }' 00:10:47.749 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:47.749 18:13:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.317 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:48.318 18:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:48.318 [2024-07-24 18:13:56.769354] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1159890 00:10:49.256 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.515 18:13:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.515 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.515 "name": "raid_bdev1", 00:10:49.515 "uuid": "ee9c857e-a3a7-4ab5-a9b0-0c2e7826511f", 00:10:49.515 "strip_size_kb": 64, 00:10:49.515 "state": "online", 00:10:49.515 "raid_level": "concat", 00:10:49.515 "superblock": true, 00:10:49.515 "num_base_bdevs": 2, 00:10:49.515 "num_base_bdevs_discovered": 2, 00:10:49.515 "num_base_bdevs_operational": 2, 00:10:49.515 "base_bdevs_list": [ 00:10:49.515 { 
00:10:49.515 "name": "BaseBdev1", 00:10:49.515 "uuid": "8e35050d-47cf-5d78-8e4b-e970340ebde0", 00:10:49.515 "is_configured": true, 00:10:49.515 "data_offset": 2048, 00:10:49.515 "data_size": 63488 00:10:49.515 }, 00:10:49.515 { 00:10:49.515 "name": "BaseBdev2", 00:10:49.515 "uuid": "b0e726e0-c02c-5c85-b26e-ced0e37bf108", 00:10:49.515 "is_configured": true, 00:10:49.515 "data_offset": 2048, 00:10:49.515 "data_size": 63488 00:10:49.515 } 00:10:49.515 ] 00:10:49.515 }' 00:10:49.515 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.515 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.084 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:50.343 [2024-07-24 18:13:58.681187] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.343 [2024-07-24 18:13:58.681213] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.343 [2024-07-24 18:13:58.683264] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.343 [2024-07-24 18:13:58.683286] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.343 [2024-07-24 18:13:58.683304] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.343 [2024-07-24 18:13:58.683312] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x115e790 name raid_bdev1, state offline 00:10:50.343 0 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2161829 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2161829 ']' 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2161829 00:10:50.343 
18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2161829 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2161829' 00:10:50.343 killing process with pid 2161829 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2161829 00:10:50.343 [2024-07-24 18:13:58.751378] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2161829 00:10:50.343 [2024-07-24 18:13:58.760873] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.hwJ1vJNVES 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:50.343 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:50.603 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:50.603 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:50.603 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:50.603 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:50.603 18:13:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:50.603 00:10:50.603 real 
0m4.932s 00:10:50.603 user 0m7.406s 00:10:50.603 sys 0m0.874s 00:10:50.603 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.603 18:13:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.603 ************************************ 00:10:50.603 END TEST raid_read_error_test 00:10:50.603 ************************************ 00:10:50.603 18:13:58 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:50.603 18:13:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:50.603 18:13:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.603 18:13:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.603 ************************************ 00:10:50.603 START TEST raid_write_error_test 00:10:50.603 ************************************ 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 
00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.nmVcnKPORb 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2162736 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2162736 /var/tmp/spdk-raid.sock 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:50.603 18:13:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2162736 ']' 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:50.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.603 18:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.603 [2024-07-24 18:13:59.079389] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:10:50.603 [2024-07-24 18:13:59.079432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162736 ] 00:10:50.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.603 EAL: Requested device 0000:b3:01.0 cannot be used 00:10:50.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.603 EAL: Requested device 0000:b3:01.1 cannot be used 00:10:50.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.603 EAL: Requested device 0000:b3:01.2 cannot be used 00:10:50.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.603 EAL: Requested device 0000:b3:01.3 cannot be used 00:10:50.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.603 EAL: Requested device 0000:b3:01.4 cannot be used 00:10:50.603 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:10:50.603 EAL: Requested device 0000:b3:01.5 cannot be used 00:10:50.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.603 EAL: Requested device 0000:b3:01.6 cannot be used 00:10:50.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.603 EAL: Requested device 0000:b3:01.7 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b3:02.0 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b3:02.1 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b3:02.2 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b3:02.3 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b3:02.4 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b3:02.5 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b3:02.6 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b3:02.7 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:01.0 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:01.1 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:01.2 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:10:50.604 EAL: Requested device 0000:b5:01.3 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:01.4 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:01.5 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:01.6 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:01.7 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:02.0 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:02.1 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:02.2 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:02.3 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:02.4 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:02.5 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:02.6 cannot be used 00:10:50.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.604 EAL: Requested device 0000:b5:02.7 cannot be used 00:10:50.604 [2024-07-24 18:13:59.171050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.863 [2024-07-24 18:13:59.243950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.863 
[2024-07-24 18:13:59.297115] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.863 [2024-07-24 18:13:59.297142] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.432 18:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.432 18:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:51.432 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:51.432 18:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:51.691 BaseBdev1_malloc 00:10:51.691 18:14:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:51.691 true 00:10:51.691 18:14:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:51.951 [2024-07-24 18:14:00.361206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:51.951 [2024-07-24 18:14:00.361240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.951 [2024-07-24 18:14:00.361254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c95ed0 00:10:51.951 [2024-07-24 18:14:00.361262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.951 [2024-07-24 18:14:00.362471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.951 [2024-07-24 18:14:00.362496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:51.951 
BaseBdev1 00:10:51.951 18:14:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:51.951 18:14:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:51.951 BaseBdev2_malloc 00:10:52.210 18:14:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:52.210 true 00:10:52.210 18:14:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:52.470 [2024-07-24 18:14:00.898311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:52.470 [2024-07-24 18:14:00.898343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.470 [2024-07-24 18:14:00.898355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c9ab60 00:10:52.470 [2024-07-24 18:14:00.898363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.470 [2024-07-24 18:14:00.899379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.470 [2024-07-24 18:14:00.899401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:52.470 BaseBdev2 00:10:52.470 18:14:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:52.729 [2024-07-24 18:14:01.070785] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.729 [2024-07-24 
18:14:01.071648] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.729 [2024-07-24 18:14:01.071779] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c9c790 00:10:52.729 [2024-07-24 18:14:01.071788] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:52.729 [2024-07-24 18:14:01.071920] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c9bff0 00:10:52.729 [2024-07-24 18:14:01.072020] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c9c790 00:10:52.729 [2024-07-24 18:14:01.072027] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c9c790 00:10:52.729 [2024-07-24 18:14:01.072093] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:52.729 18:14:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:52.729 "name": "raid_bdev1", 00:10:52.729 "uuid": "36bac80c-8288-4cd2-b7da-ce4b585c1cbc", 00:10:52.729 "strip_size_kb": 64, 00:10:52.729 "state": "online", 00:10:52.729 "raid_level": "concat", 00:10:52.729 "superblock": true, 00:10:52.729 "num_base_bdevs": 2, 00:10:52.729 "num_base_bdevs_discovered": 2, 00:10:52.729 "num_base_bdevs_operational": 2, 00:10:52.729 "base_bdevs_list": [ 00:10:52.729 { 00:10:52.729 "name": "BaseBdev1", 00:10:52.729 "uuid": "ba3e628c-568e-56dd-825f-d5dd467a70bb", 00:10:52.729 "is_configured": true, 00:10:52.729 "data_offset": 2048, 00:10:52.729 "data_size": 63488 00:10:52.729 }, 00:10:52.729 { 00:10:52.729 "name": "BaseBdev2", 00:10:52.729 "uuid": "035caa60-d800-5f9c-8c28-b5dd820a69b6", 00:10:52.729 "is_configured": true, 00:10:52.729 "data_offset": 2048, 00:10:52.729 "data_size": 63488 00:10:52.729 } 00:10:52.729 ] 00:10:52.729 }' 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:52.729 18:14:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.296 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:53.297 18:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:53.297 [2024-07-24 18:14:01.808887] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c97890 00:10:54.234 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.494 18:14:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.753 18:14:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:54.753 "name": "raid_bdev1", 00:10:54.753 "uuid": "36bac80c-8288-4cd2-b7da-ce4b585c1cbc", 00:10:54.753 "strip_size_kb": 64, 00:10:54.753 "state": "online", 00:10:54.753 "raid_level": "concat", 00:10:54.753 "superblock": true, 00:10:54.753 "num_base_bdevs": 2, 00:10:54.753 "num_base_bdevs_discovered": 2, 00:10:54.753 "num_base_bdevs_operational": 2, 00:10:54.753 "base_bdevs_list": [ 00:10:54.753 { 00:10:54.753 "name": "BaseBdev1", 00:10:54.753 "uuid": "ba3e628c-568e-56dd-825f-d5dd467a70bb", 00:10:54.753 "is_configured": true, 00:10:54.753 "data_offset": 2048, 00:10:54.753 "data_size": 63488 00:10:54.753 }, 00:10:54.753 { 00:10:54.753 "name": "BaseBdev2", 00:10:54.753 "uuid": "035caa60-d800-5f9c-8c28-b5dd820a69b6", 00:10:54.753 "is_configured": true, 00:10:54.753 "data_offset": 2048, 00:10:54.753 "data_size": 63488 00:10:54.753 } 00:10:54.753 ] 00:10:54.753 }' 00:10:54.753 18:14:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:54.753 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.011 18:14:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:55.269 [2024-07-24 18:14:03.753042] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.269 [2024-07-24 18:14:03.753068] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.269 [2024-07-24 18:14:03.755171] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.269 [2024-07-24 18:14:03.755194] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.269 [2024-07-24 18:14:03.755215] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.269 [2024-07-24 18:14:03.755229] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c9c790 name raid_bdev1, state offline 00:10:55.269 0 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2162736 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2162736 ']' 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2162736 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2162736 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2162736' 00:10:55.269 killing process with pid 2162736 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2162736 00:10:55.269 [2024-07-24 18:14:03.828161] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.269 18:14:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2162736 00:10:55.269 [2024-07-24 18:14:03.837623] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.528 18:14:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.nmVcnKPORb 00:10:55.528 18:14:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:55.528 18:14:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:55.528 18:14:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:55.528 18:14:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:55.528 18:14:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:55.528 18:14:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:55.528 18:14:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:55.528 00:10:55.528 real 0m5.014s 00:10:55.528 user 0m7.542s 00:10:55.528 sys 0m0.882s 00:10:55.528 18:14:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.528 18:14:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.528 ************************************ 00:10:55.528 END TEST raid_write_error_test 00:10:55.528 ************************************ 00:10:55.528 18:14:04 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:55.528 18:14:04 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:55.528 18:14:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:55.528 18:14:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.528 18:14:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.528 ************************************ 00:10:55.528 START TEST raid_state_function_test 00:10:55.528 ************************************ 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:55.528 
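The `fail_per_s` value above is extracted from the bdevperf log by the `grep -v Job | grep raid_bdev1 | awk '{print $6}'` pipeline of bdev_raid.sh@843, then compared against `0.00`: concat provides no redundancy, so injected write errors must surface as I/O failures. A standalone sketch of that pipeline against a fabricated summary line (only the pipeline is taken from the script; the column layout of the sample line is hypothetical):

```shell
# Mirror of the extraction in bdev_raid.sh@843. The summary line below is
# fabricated; in this sketch, field 6 carries the per-second failure count.
log=$(mktemp)
printf '%s\n' \
  'Job: bdevperf (PID 12345)' \
  'raid_bdev1 1.00 9540.21 1192.53 0.00 0.52' > "$log"
fail_per_s=$(grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}')
rm -f "$log"
# has_redundancy returns 1 for concat, so a non-zero rate is the expected outcome:
if [ "$fail_per_s" != "0.00" ]; then
  echo "failures propagated: $fail_per_s/s"
fi
```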
18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2163640 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2163640' 00:10:55.528 Process raid pid: 2163640 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2163640 /var/tmp/spdk-raid.sock 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2163640 ']' 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:55.528 18:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.529 18:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:55.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:55.529 18:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.529 18:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.788 [2024-07-24 18:14:04.168091] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:10:55.788 [2024-07-24 18:14:04.168138] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.788 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.788 EAL: Requested device 0000:b5:02.2 cannot be used 00:10:55.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.788 EAL: Requested device 0000:b5:02.3 cannot be used 00:10:55.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.788 EAL: Requested device 0000:b5:02.4 cannot be used 00:10:55.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.788 EAL: Requested device 0000:b5:02.5 cannot be used 00:10:55.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.788 EAL: Requested device 0000:b5:02.6 cannot be used 00:10:55.788 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:55.788 EAL: Requested device 0000:b5:02.7 cannot be used 00:10:55.788 [2024-07-24 18:14:04.263827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.788 [2024-07-24 18:14:04.335167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.047 [2024-07-24 18:14:04.393489] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.047 [2024-07-24 18:14:04.393514] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.615 18:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.615 18:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:56.615 18:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:56.615 [2024-07-24 18:14:05.104622] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.615 [2024-07-24 18:14:05.104657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:10:56.615 [2024-07-24 18:14:05.104665] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.615 [2024-07-24 18:14:05.104673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.615 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.874 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:56.874 "name": "Existed_Raid", 00:10:56.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.874 "strip_size_kb": 0, 
00:10:56.874 "state": "configuring", 00:10:56.874 "raid_level": "raid1", 00:10:56.874 "superblock": false, 00:10:56.874 "num_base_bdevs": 2, 00:10:56.874 "num_base_bdevs_discovered": 0, 00:10:56.874 "num_base_bdevs_operational": 2, 00:10:56.874 "base_bdevs_list": [ 00:10:56.874 { 00:10:56.874 "name": "BaseBdev1", 00:10:56.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.874 "is_configured": false, 00:10:56.874 "data_offset": 0, 00:10:56.874 "data_size": 0 00:10:56.874 }, 00:10:56.874 { 00:10:56.874 "name": "BaseBdev2", 00:10:56.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.874 "is_configured": false, 00:10:56.874 "data_offset": 0, 00:10:56.874 "data_size": 0 00:10:56.874 } 00:10:56.874 ] 00:10:56.874 }' 00:10:56.874 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:56.874 18:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.484 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:57.484 [2024-07-24 18:14:05.934694] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.484 [2024-07-24 18:14:05.934716] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8411a0 name Existed_Raid, state configuring 00:10:57.484 18:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:57.743 [2024-07-24 18:14:06.087091] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.743 [2024-07-24 18:14:06.087114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.743 [2024-07-24 18:14:06.087121] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.743 [2024-07-24 18:14:06.087128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.743 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:57.743 [2024-07-24 18:14:06.264115] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.743 BaseBdev1 00:10:57.743 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:57.743 18:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:57.743 18:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.743 18:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.743 18:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.743 18:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.743 18:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:58.002 18:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.002 [ 00:10:58.002 { 00:10:58.002 "name": "BaseBdev1", 00:10:58.002 "aliases": [ 00:10:58.002 "3a476b2e-e861-4a9b-9601-0abab54d6d47" 00:10:58.002 ], 00:10:58.002 "product_name": "Malloc disk", 00:10:58.002 "block_size": 512, 00:10:58.002 "num_blocks": 65536, 00:10:58.002 "uuid": "3a476b2e-e861-4a9b-9601-0abab54d6d47", 00:10:58.003 
"assigned_rate_limits": { 00:10:58.003 "rw_ios_per_sec": 0, 00:10:58.003 "rw_mbytes_per_sec": 0, 00:10:58.003 "r_mbytes_per_sec": 0, 00:10:58.003 "w_mbytes_per_sec": 0 00:10:58.003 }, 00:10:58.003 "claimed": true, 00:10:58.003 "claim_type": "exclusive_write", 00:10:58.003 "zoned": false, 00:10:58.003 "supported_io_types": { 00:10:58.003 "read": true, 00:10:58.003 "write": true, 00:10:58.003 "unmap": true, 00:10:58.003 "flush": true, 00:10:58.003 "reset": true, 00:10:58.003 "nvme_admin": false, 00:10:58.003 "nvme_io": false, 00:10:58.003 "nvme_io_md": false, 00:10:58.003 "write_zeroes": true, 00:10:58.003 "zcopy": true, 00:10:58.003 "get_zone_info": false, 00:10:58.003 "zone_management": false, 00:10:58.003 "zone_append": false, 00:10:58.003 "compare": false, 00:10:58.003 "compare_and_write": false, 00:10:58.003 "abort": true, 00:10:58.003 "seek_hole": false, 00:10:58.003 "seek_data": false, 00:10:58.003 "copy": true, 00:10:58.003 "nvme_iov_md": false 00:10:58.003 }, 00:10:58.003 "memory_domains": [ 00:10:58.003 { 00:10:58.003 "dma_device_id": "system", 00:10:58.003 "dma_device_type": 1 00:10:58.003 }, 00:10:58.003 { 00:10:58.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.003 "dma_device_type": 2 00:10:58.003 } 00:10:58.003 ], 00:10:58.003 "driver_specific": {} 00:10:58.003 } 00:10:58.003 ] 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:58.262 "name": "Existed_Raid", 00:10:58.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.262 "strip_size_kb": 0, 00:10:58.262 "state": "configuring", 00:10:58.262 "raid_level": "raid1", 00:10:58.262 "superblock": false, 00:10:58.262 "num_base_bdevs": 2, 00:10:58.262 "num_base_bdevs_discovered": 1, 00:10:58.262 "num_base_bdevs_operational": 2, 00:10:58.262 "base_bdevs_list": [ 00:10:58.262 { 00:10:58.262 "name": "BaseBdev1", 00:10:58.262 "uuid": "3a476b2e-e861-4a9b-9601-0abab54d6d47", 00:10:58.262 "is_configured": true, 00:10:58.262 "data_offset": 0, 00:10:58.262 "data_size": 65536 00:10:58.262 }, 00:10:58.262 { 00:10:58.262 "name": "BaseBdev2", 00:10:58.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.262 "is_configured": false, 00:10:58.262 "data_offset": 0, 00:10:58.262 "data_size": 0 00:10:58.262 } 00:10:58.262 ] 00:10:58.262 }' 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:10:58.262 18:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.831 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:58.831 [2024-07-24 18:14:07.403033] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.831 [2024-07-24 18:14:07.403061] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x840a90 name Existed_Raid, state configuring 00:10:58.831 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:59.090 [2024-07-24 18:14:07.583512] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.090 [2024-07-24 18:14:07.584548] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:59.090 [2024-07-24 18:14:07.584574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.090 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.349 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:59.349 "name": "Existed_Raid", 00:10:59.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.349 "strip_size_kb": 0, 00:10:59.349 "state": "configuring", 00:10:59.349 "raid_level": "raid1", 00:10:59.349 "superblock": false, 00:10:59.349 "num_base_bdevs": 2, 00:10:59.349 "num_base_bdevs_discovered": 1, 00:10:59.349 "num_base_bdevs_operational": 2, 00:10:59.349 "base_bdevs_list": [ 00:10:59.349 { 00:10:59.349 "name": "BaseBdev1", 00:10:59.349 "uuid": "3a476b2e-e861-4a9b-9601-0abab54d6d47", 00:10:59.349 "is_configured": true, 00:10:59.349 "data_offset": 0, 00:10:59.350 "data_size": 65536 00:10:59.350 }, 00:10:59.350 { 00:10:59.350 "name": "BaseBdev2", 00:10:59.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.350 "is_configured": false, 00:10:59.350 "data_offset": 0, 00:10:59.350 "data_size": 0 00:10:59.350 } 00:10:59.350 ] 00:10:59.350 }' 00:10:59.350 18:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:10:59.350 18:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.918 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:59.918 [2024-07-24 18:14:08.404376] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.918 [2024-07-24 18:14:08.404404] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x841880 00:10:59.918 [2024-07-24 18:14:08.404410] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:59.918 [2024-07-24 18:14:08.404536] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9f49a0 00:10:59.918 [2024-07-24 18:14:08.404621] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x841880 00:10:59.918 [2024-07-24 18:14:08.404634] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x841880 00:10:59.918 [2024-07-24 18:14:08.404754] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.918 BaseBdev2 00:10:59.918 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:59.918 18:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:59.918 18:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.918 18:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.918 18:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.918 18:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.918 18:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:00.177 18:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:00.177 [ 00:11:00.177 { 00:11:00.177 "name": "BaseBdev2", 00:11:00.177 "aliases": [ 00:11:00.177 "6a1586da-16ca-4089-aa15-99f35f43fd23" 00:11:00.177 ], 00:11:00.177 "product_name": "Malloc disk", 00:11:00.177 "block_size": 512, 00:11:00.177 "num_blocks": 65536, 00:11:00.177 "uuid": "6a1586da-16ca-4089-aa15-99f35f43fd23", 00:11:00.177 "assigned_rate_limits": { 00:11:00.177 "rw_ios_per_sec": 0, 00:11:00.177 "rw_mbytes_per_sec": 0, 00:11:00.177 "r_mbytes_per_sec": 0, 00:11:00.177 "w_mbytes_per_sec": 0 00:11:00.177 }, 00:11:00.177 "claimed": true, 00:11:00.177 "claim_type": "exclusive_write", 00:11:00.177 "zoned": false, 00:11:00.177 "supported_io_types": { 00:11:00.177 "read": true, 00:11:00.177 "write": true, 00:11:00.177 "unmap": true, 00:11:00.177 "flush": true, 00:11:00.177 "reset": true, 00:11:00.177 "nvme_admin": false, 00:11:00.177 "nvme_io": false, 00:11:00.177 "nvme_io_md": false, 00:11:00.178 "write_zeroes": true, 00:11:00.178 "zcopy": true, 00:11:00.178 "get_zone_info": false, 00:11:00.178 "zone_management": false, 00:11:00.178 "zone_append": false, 00:11:00.178 "compare": false, 00:11:00.178 "compare_and_write": false, 00:11:00.178 "abort": true, 00:11:00.178 "seek_hole": false, 00:11:00.178 "seek_data": false, 00:11:00.178 "copy": true, 00:11:00.178 "nvme_iov_md": false 00:11:00.178 }, 00:11:00.178 "memory_domains": [ 00:11:00.178 { 00:11:00.178 "dma_device_id": "system", 00:11:00.178 "dma_device_type": 1 00:11:00.178 }, 00:11:00.178 { 00:11:00.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.178 "dma_device_type": 2 00:11:00.178 } 00:11:00.178 ], 00:11:00.178 "driver_specific": {} 00:11:00.178 } 00:11:00.178 ] 
00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.178 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.437 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:00.437 "name": "Existed_Raid", 00:11:00.437 "uuid": "413f4292-67fa-4227-9384-f363df09ba54", 
00:11:00.437 "strip_size_kb": 0, 00:11:00.437 "state": "online", 00:11:00.437 "raid_level": "raid1", 00:11:00.437 "superblock": false, 00:11:00.437 "num_base_bdevs": 2, 00:11:00.437 "num_base_bdevs_discovered": 2, 00:11:00.437 "num_base_bdevs_operational": 2, 00:11:00.437 "base_bdevs_list": [ 00:11:00.437 { 00:11:00.437 "name": "BaseBdev1", 00:11:00.437 "uuid": "3a476b2e-e861-4a9b-9601-0abab54d6d47", 00:11:00.437 "is_configured": true, 00:11:00.437 "data_offset": 0, 00:11:00.437 "data_size": 65536 00:11:00.437 }, 00:11:00.437 { 00:11:00.437 "name": "BaseBdev2", 00:11:00.437 "uuid": "6a1586da-16ca-4089-aa15-99f35f43fd23", 00:11:00.437 "is_configured": true, 00:11:00.437 "data_offset": 0, 00:11:00.437 "data_size": 65536 00:11:00.437 } 00:11:00.437 ] 00:11:00.437 }' 00:11:00.437 18:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:00.438 18:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.005 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.005 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:01.005 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:01.005 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:01.005 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:01.005 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:01.005 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:01.005 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:01.005 [2024-07-24 18:14:09.591611] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.265 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:01.265 "name": "Existed_Raid", 00:11:01.265 "aliases": [ 00:11:01.265 "413f4292-67fa-4227-9384-f363df09ba54" 00:11:01.265 ], 00:11:01.265 "product_name": "Raid Volume", 00:11:01.265 "block_size": 512, 00:11:01.265 "num_blocks": 65536, 00:11:01.265 "uuid": "413f4292-67fa-4227-9384-f363df09ba54", 00:11:01.265 "assigned_rate_limits": { 00:11:01.265 "rw_ios_per_sec": 0, 00:11:01.265 "rw_mbytes_per_sec": 0, 00:11:01.265 "r_mbytes_per_sec": 0, 00:11:01.265 "w_mbytes_per_sec": 0 00:11:01.265 }, 00:11:01.265 "claimed": false, 00:11:01.265 "zoned": false, 00:11:01.265 "supported_io_types": { 00:11:01.265 "read": true, 00:11:01.265 "write": true, 00:11:01.265 "unmap": false, 00:11:01.265 "flush": false, 00:11:01.265 "reset": true, 00:11:01.265 "nvme_admin": false, 00:11:01.265 "nvme_io": false, 00:11:01.265 "nvme_io_md": false, 00:11:01.265 "write_zeroes": true, 00:11:01.265 "zcopy": false, 00:11:01.265 "get_zone_info": false, 00:11:01.265 "zone_management": false, 00:11:01.265 "zone_append": false, 00:11:01.265 "compare": false, 00:11:01.265 "compare_and_write": false, 00:11:01.265 "abort": false, 00:11:01.265 "seek_hole": false, 00:11:01.265 "seek_data": false, 00:11:01.265 "copy": false, 00:11:01.265 "nvme_iov_md": false 00:11:01.265 }, 00:11:01.265 "memory_domains": [ 00:11:01.265 { 00:11:01.265 "dma_device_id": "system", 00:11:01.265 "dma_device_type": 1 00:11:01.265 }, 00:11:01.265 { 00:11:01.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.265 "dma_device_type": 2 00:11:01.265 }, 00:11:01.265 { 00:11:01.265 "dma_device_id": "system", 00:11:01.265 "dma_device_type": 1 00:11:01.265 }, 00:11:01.265 { 00:11:01.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.265 "dma_device_type": 2 00:11:01.265 } 00:11:01.265 ], 00:11:01.265 "driver_specific": { 00:11:01.265 "raid": { 
00:11:01.265 "uuid": "413f4292-67fa-4227-9384-f363df09ba54", 00:11:01.265 "strip_size_kb": 0, 00:11:01.265 "state": "online", 00:11:01.265 "raid_level": "raid1", 00:11:01.265 "superblock": false, 00:11:01.265 "num_base_bdevs": 2, 00:11:01.265 "num_base_bdevs_discovered": 2, 00:11:01.265 "num_base_bdevs_operational": 2, 00:11:01.265 "base_bdevs_list": [ 00:11:01.265 { 00:11:01.265 "name": "BaseBdev1", 00:11:01.265 "uuid": "3a476b2e-e861-4a9b-9601-0abab54d6d47", 00:11:01.265 "is_configured": true, 00:11:01.265 "data_offset": 0, 00:11:01.265 "data_size": 65536 00:11:01.265 }, 00:11:01.265 { 00:11:01.265 "name": "BaseBdev2", 00:11:01.265 "uuid": "6a1586da-16ca-4089-aa15-99f35f43fd23", 00:11:01.265 "is_configured": true, 00:11:01.265 "data_offset": 0, 00:11:01.265 "data_size": 65536 00:11:01.265 } 00:11:01.265 ] 00:11:01.265 } 00:11:01.265 } 00:11:01.265 }' 00:11:01.265 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.265 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:01.265 BaseBdev2' 00:11:01.265 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.265 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:01.265 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.265 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.265 "name": "BaseBdev1", 00:11:01.265 "aliases": [ 00:11:01.265 "3a476b2e-e861-4a9b-9601-0abab54d6d47" 00:11:01.265 ], 00:11:01.265 "product_name": "Malloc disk", 00:11:01.265 "block_size": 512, 00:11:01.265 "num_blocks": 65536, 00:11:01.265 "uuid": "3a476b2e-e861-4a9b-9601-0abab54d6d47", 
00:11:01.265 "assigned_rate_limits": { 00:11:01.265 "rw_ios_per_sec": 0, 00:11:01.265 "rw_mbytes_per_sec": 0, 00:11:01.265 "r_mbytes_per_sec": 0, 00:11:01.265 "w_mbytes_per_sec": 0 00:11:01.265 }, 00:11:01.266 "claimed": true, 00:11:01.266 "claim_type": "exclusive_write", 00:11:01.266 "zoned": false, 00:11:01.266 "supported_io_types": { 00:11:01.266 "read": true, 00:11:01.266 "write": true, 00:11:01.266 "unmap": true, 00:11:01.266 "flush": true, 00:11:01.266 "reset": true, 00:11:01.266 "nvme_admin": false, 00:11:01.266 "nvme_io": false, 00:11:01.266 "nvme_io_md": false, 00:11:01.266 "write_zeroes": true, 00:11:01.266 "zcopy": true, 00:11:01.266 "get_zone_info": false, 00:11:01.266 "zone_management": false, 00:11:01.266 "zone_append": false, 00:11:01.266 "compare": false, 00:11:01.266 "compare_and_write": false, 00:11:01.266 "abort": true, 00:11:01.266 "seek_hole": false, 00:11:01.266 "seek_data": false, 00:11:01.266 "copy": true, 00:11:01.266 "nvme_iov_md": false 00:11:01.266 }, 00:11:01.266 "memory_domains": [ 00:11:01.266 { 00:11:01.266 "dma_device_id": "system", 00:11:01.266 "dma_device_type": 1 00:11:01.266 }, 00:11:01.266 { 00:11:01.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.266 "dma_device_type": 2 00:11:01.266 } 00:11:01.266 ], 00:11:01.266 "driver_specific": {} 00:11:01.266 }' 00:11:01.266 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.525 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.525 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.525 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.525 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.525 18:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.525 18:14:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.525 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.525 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.525 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.525 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.785 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.785 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.785 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:01.785 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.785 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.785 "name": "BaseBdev2", 00:11:01.785 "aliases": [ 00:11:01.785 "6a1586da-16ca-4089-aa15-99f35f43fd23" 00:11:01.785 ], 00:11:01.785 "product_name": "Malloc disk", 00:11:01.785 "block_size": 512, 00:11:01.785 "num_blocks": 65536, 00:11:01.785 "uuid": "6a1586da-16ca-4089-aa15-99f35f43fd23", 00:11:01.785 "assigned_rate_limits": { 00:11:01.785 "rw_ios_per_sec": 0, 00:11:01.785 "rw_mbytes_per_sec": 0, 00:11:01.785 "r_mbytes_per_sec": 0, 00:11:01.785 "w_mbytes_per_sec": 0 00:11:01.785 }, 00:11:01.785 "claimed": true, 00:11:01.785 "claim_type": "exclusive_write", 00:11:01.785 "zoned": false, 00:11:01.785 "supported_io_types": { 00:11:01.785 "read": true, 00:11:01.785 "write": true, 00:11:01.785 "unmap": true, 00:11:01.785 "flush": true, 00:11:01.785 "reset": true, 00:11:01.785 "nvme_admin": false, 00:11:01.785 "nvme_io": false, 00:11:01.785 "nvme_io_md": false, 00:11:01.785 "write_zeroes": true, 
00:11:01.785 "zcopy": true, 00:11:01.785 "get_zone_info": false, 00:11:01.785 "zone_management": false, 00:11:01.785 "zone_append": false, 00:11:01.785 "compare": false, 00:11:01.785 "compare_and_write": false, 00:11:01.785 "abort": true, 00:11:01.785 "seek_hole": false, 00:11:01.785 "seek_data": false, 00:11:01.785 "copy": true, 00:11:01.785 "nvme_iov_md": false 00:11:01.785 }, 00:11:01.785 "memory_domains": [ 00:11:01.785 { 00:11:01.785 "dma_device_id": "system", 00:11:01.785 "dma_device_type": 1 00:11:01.785 }, 00:11:01.785 { 00:11:01.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.785 "dma_device_type": 2 00:11:01.785 } 00:11:01.785 ], 00:11:01.785 "driver_specific": {} 00:11:01.785 }' 00:11:01.785 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.785 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:02.045 18:14:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:02.304 [2024-07-24 18:14:10.746450] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:02.304 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:02.305 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:02.305 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:02.305 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.305 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.564 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:02.564 "name": "Existed_Raid", 00:11:02.564 "uuid": "413f4292-67fa-4227-9384-f363df09ba54", 00:11:02.564 "strip_size_kb": 0, 00:11:02.564 "state": "online", 00:11:02.564 "raid_level": "raid1", 00:11:02.564 "superblock": false, 00:11:02.564 "num_base_bdevs": 2, 00:11:02.564 "num_base_bdevs_discovered": 1, 00:11:02.564 "num_base_bdevs_operational": 1, 00:11:02.564 "base_bdevs_list": [ 00:11:02.564 { 00:11:02.564 "name": null, 00:11:02.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.564 "is_configured": false, 00:11:02.564 "data_offset": 0, 00:11:02.564 "data_size": 65536 00:11:02.564 }, 00:11:02.564 { 00:11:02.564 "name": "BaseBdev2", 00:11:02.564 "uuid": "6a1586da-16ca-4089-aa15-99f35f43fd23", 00:11:02.564 "is_configured": true, 00:11:02.564 "data_offset": 0, 00:11:02.564 "data_size": 65536 00:11:02.564 } 00:11:02.564 ] 00:11:02.564 }' 00:11:02.564 18:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:02.564 18:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.822 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:02.822 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:02.822 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:02.822 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.080 18:14:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:03.080 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:03.080 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:03.339 [2024-07-24 18:14:11.725827] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:03.339 [2024-07-24 18:14:11.725889] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.339 [2024-07-24 18:14:11.735657] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.339 [2024-07-24 18:14:11.735680] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.339 [2024-07-24 18:14:11.735688] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x841880 name Existed_Raid, state offline 00:11:03.339 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:03.339 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:03.339 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.339 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- 
# killprocess 2163640 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2163640 ']' 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2163640 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2163640 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2163640' 00:11:03.598 killing process with pid 2163640 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2163640 00:11:03.598 [2024-07-24 18:14:11.994476] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.598 18:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2163640 00:11:03.598 [2024-07-24 18:14:11.995283] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.598 18:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:03.598 00:11:03.598 real 0m8.061s 00:11:03.598 user 0m14.149s 00:11:03.598 sys 0m1.615s 00:11:03.598 18:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.598 18:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.598 ************************************ 00:11:03.598 END TEST raid_state_function_test 00:11:03.598 ************************************ 00:11:03.857 
18:14:12 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:03.857 18:14:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:03.857 18:14:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.857 18:14:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.857 ************************************ 00:11:03.857 START TEST raid_state_function_test_sb 00:11:03.857 ************************************ 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:03.857 18:14:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:03.857 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2165398 00:11:03.858 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2165398' 00:11:03.858 Process raid pid: 2165398 00:11:03.858 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:03.858 18:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2165398 /var/tmp/spdk-raid.sock 00:11:03.858 18:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2165398 ']' 00:11:03.858 18:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 
00:11:03.858 18:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.858 18:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:03.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:03.858 18:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.858 18:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.858 [2024-07-24 18:14:12.312188] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:11:03.858 [2024-07-24 18:14:12.312232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:01.0 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:01.1 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:01.2 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:01.3 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:01.4 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:01.5 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 
0000:b3:01.6 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:01.7 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:02.0 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:02.1 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:02.2 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:02.3 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:02.4 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:02.5 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:02.6 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b3:02.7 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:01.0 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:01.1 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:01.2 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:01.3 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:01.4 cannot be 
used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:01.5 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:01.6 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:01.7 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:02.0 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:02.1 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:02.2 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:02.3 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:02.4 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:02.5 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:02.6 cannot be used 00:11:03.858 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:03.858 EAL: Requested device 0000:b5:02.7 cannot be used 00:11:03.858 [2024-07-24 18:14:12.406561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.117 [2024-07-24 18:14:12.480143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.117 [2024-07-24 18:14:12.529611] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.117 [2024-07-24 18:14:12.529644] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 
00:11:04.685 18:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.685 18:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:04.685 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:04.685 [2024-07-24 18:14:13.264595] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:04.685 [2024-07-24 18:14:13.264622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:04.685 [2024-07-24 18:14:13.264632] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.685 [2024-07-24 18:14:13.264640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.685 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:04.685 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:04.685 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:04.685 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:04.685 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:04.685 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:04.943 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:04.943 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:04.943 18:14:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:04.943 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:04.943 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.943 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.943 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:04.943 "name": "Existed_Raid", 00:11:04.943 "uuid": "cfeaca08-d42e-4f3f-8186-6bbf117a9e8f", 00:11:04.943 "strip_size_kb": 0, 00:11:04.943 "state": "configuring", 00:11:04.943 "raid_level": "raid1", 00:11:04.943 "superblock": true, 00:11:04.943 "num_base_bdevs": 2, 00:11:04.943 "num_base_bdevs_discovered": 0, 00:11:04.943 "num_base_bdevs_operational": 2, 00:11:04.943 "base_bdevs_list": [ 00:11:04.943 { 00:11:04.943 "name": "BaseBdev1", 00:11:04.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.943 "is_configured": false, 00:11:04.943 "data_offset": 0, 00:11:04.943 "data_size": 0 00:11:04.943 }, 00:11:04.943 { 00:11:04.943 "name": "BaseBdev2", 00:11:04.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.943 "is_configured": false, 00:11:04.943 "data_offset": 0, 00:11:04.943 "data_size": 0 00:11:04.943 } 00:11:04.943 ] 00:11:04.943 }' 00:11:04.943 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:04.943 18:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.510 18:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:05.510 [2024-07-24 18:14:14.090635] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.510 [2024-07-24 18:14:14.090657] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x135a1a0 name Existed_Raid, state configuring 00:11:05.510 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:05.769 [2024-07-24 18:14:14.259101] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.769 [2024-07-24 18:14:14.259120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.769 [2024-07-24 18:14:14.259126] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.769 [2024-07-24 18:14:14.259133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.769 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.028 [2024-07-24 18:14:14.424090] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.028 BaseBdev1 00:11:06.028 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:06.028 18:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:06.028 18:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.028 18:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:06.028 18:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.028 18:14:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.028 18:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:06.028 18:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:06.303 [ 00:11:06.303 { 00:11:06.303 "name": "BaseBdev1", 00:11:06.303 "aliases": [ 00:11:06.303 "1ded7847-c62b-4137-b333-f9c4e8662ca6" 00:11:06.303 ], 00:11:06.303 "product_name": "Malloc disk", 00:11:06.303 "block_size": 512, 00:11:06.303 "num_blocks": 65536, 00:11:06.303 "uuid": "1ded7847-c62b-4137-b333-f9c4e8662ca6", 00:11:06.303 "assigned_rate_limits": { 00:11:06.303 "rw_ios_per_sec": 0, 00:11:06.303 "rw_mbytes_per_sec": 0, 00:11:06.303 "r_mbytes_per_sec": 0, 00:11:06.303 "w_mbytes_per_sec": 0 00:11:06.303 }, 00:11:06.303 "claimed": true, 00:11:06.303 "claim_type": "exclusive_write", 00:11:06.303 "zoned": false, 00:11:06.303 "supported_io_types": { 00:11:06.303 "read": true, 00:11:06.303 "write": true, 00:11:06.303 "unmap": true, 00:11:06.303 "flush": true, 00:11:06.303 "reset": true, 00:11:06.303 "nvme_admin": false, 00:11:06.303 "nvme_io": false, 00:11:06.303 "nvme_io_md": false, 00:11:06.303 "write_zeroes": true, 00:11:06.303 "zcopy": true, 00:11:06.303 "get_zone_info": false, 00:11:06.303 "zone_management": false, 00:11:06.303 "zone_append": false, 00:11:06.303 "compare": false, 00:11:06.303 "compare_and_write": false, 00:11:06.303 "abort": true, 00:11:06.303 "seek_hole": false, 00:11:06.303 "seek_data": false, 00:11:06.303 "copy": true, 00:11:06.303 "nvme_iov_md": false 00:11:06.303 }, 00:11:06.303 "memory_domains": [ 00:11:06.303 { 00:11:06.303 "dma_device_id": "system", 00:11:06.303 "dma_device_type": 1 00:11:06.303 }, 00:11:06.303 { 00:11:06.303 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:06.303 "dma_device_type": 2 00:11:06.303 } 00:11:06.303 ], 00:11:06.303 "driver_specific": {} 00:11:06.303 } 00:11:06.303 ] 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.303 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.563 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:06.563 "name": "Existed_Raid", 00:11:06.563 "uuid": "322292bb-8c52-4302-bb65-82b095e2315e", 
00:11:06.563 "strip_size_kb": 0, 00:11:06.563 "state": "configuring", 00:11:06.563 "raid_level": "raid1", 00:11:06.563 "superblock": true, 00:11:06.563 "num_base_bdevs": 2, 00:11:06.563 "num_base_bdevs_discovered": 1, 00:11:06.563 "num_base_bdevs_operational": 2, 00:11:06.563 "base_bdevs_list": [ 00:11:06.563 { 00:11:06.563 "name": "BaseBdev1", 00:11:06.563 "uuid": "1ded7847-c62b-4137-b333-f9c4e8662ca6", 00:11:06.563 "is_configured": true, 00:11:06.563 "data_offset": 2048, 00:11:06.563 "data_size": 63488 00:11:06.563 }, 00:11:06.563 { 00:11:06.563 "name": "BaseBdev2", 00:11:06.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.563 "is_configured": false, 00:11:06.563 "data_offset": 0, 00:11:06.563 "data_size": 0 00:11:06.563 } 00:11:06.563 ] 00:11:06.563 }' 00:11:06.563 18:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:06.563 18:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.131 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:07.131 [2024-07-24 18:14:15.591096] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:07.131 [2024-07-24 18:14:15.591125] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1359a90 name Existed_Raid, state configuring 00:11:07.131 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:07.390 [2024-07-24 18:14:15.755547] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.390 [2024-07-24 18:14:15.756637] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:07.390 [2024-07-24 
18:14:15.756662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:07.390 "name": "Existed_Raid", 
00:11:07.390 "uuid": "6f39c6ab-53cf-46e3-aede-6c6e0d00c1cc", 00:11:07.390 "strip_size_kb": 0, 00:11:07.390 "state": "configuring", 00:11:07.390 "raid_level": "raid1", 00:11:07.390 "superblock": true, 00:11:07.390 "num_base_bdevs": 2, 00:11:07.390 "num_base_bdevs_discovered": 1, 00:11:07.390 "num_base_bdevs_operational": 2, 00:11:07.390 "base_bdevs_list": [ 00:11:07.390 { 00:11:07.390 "name": "BaseBdev1", 00:11:07.390 "uuid": "1ded7847-c62b-4137-b333-f9c4e8662ca6", 00:11:07.390 "is_configured": true, 00:11:07.390 "data_offset": 2048, 00:11:07.390 "data_size": 63488 00:11:07.390 }, 00:11:07.390 { 00:11:07.390 "name": "BaseBdev2", 00:11:07.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.390 "is_configured": false, 00:11:07.390 "data_offset": 0, 00:11:07.390 "data_size": 0 00:11:07.390 } 00:11:07.390 ] 00:11:07.390 }' 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:07.390 18:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.958 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:08.217 [2024-07-24 18:14:16.608553] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.217 [2024-07-24 18:14:16.608680] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x135a880 00:11:08.217 [2024-07-24 18:14:16.608690] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:08.217 [2024-07-24 18:14:16.608810] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x150d9a0 00:11:08.217 [2024-07-24 18:14:16.608893] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x135a880 00:11:08.217 [2024-07-24 18:14:16.608901] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name Existed_Raid, raid_bdev 0x135a880 00:11:08.217 [2024-07-24 18:14:16.608966] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.217 BaseBdev2 00:11:08.217 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:08.217 18:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:08.217 18:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:08.217 18:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:08.217 18:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:08.218 18:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:08.218 18:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:08.218 18:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:08.477 [ 00:11:08.477 { 00:11:08.477 "name": "BaseBdev2", 00:11:08.477 "aliases": [ 00:11:08.477 "09b62915-dd34-4e98-a181-b6a09b5b9448" 00:11:08.477 ], 00:11:08.477 "product_name": "Malloc disk", 00:11:08.477 "block_size": 512, 00:11:08.477 "num_blocks": 65536, 00:11:08.477 "uuid": "09b62915-dd34-4e98-a181-b6a09b5b9448", 00:11:08.477 "assigned_rate_limits": { 00:11:08.477 "rw_ios_per_sec": 0, 00:11:08.477 "rw_mbytes_per_sec": 0, 00:11:08.477 "r_mbytes_per_sec": 0, 00:11:08.477 "w_mbytes_per_sec": 0 00:11:08.477 }, 00:11:08.477 "claimed": true, 00:11:08.477 "claim_type": "exclusive_write", 00:11:08.477 "zoned": false, 00:11:08.477 "supported_io_types": { 00:11:08.477 "read": true, 
00:11:08.477 "write": true, 00:11:08.477 "unmap": true, 00:11:08.477 "flush": true, 00:11:08.477 "reset": true, 00:11:08.477 "nvme_admin": false, 00:11:08.477 "nvme_io": false, 00:11:08.477 "nvme_io_md": false, 00:11:08.477 "write_zeroes": true, 00:11:08.477 "zcopy": true, 00:11:08.477 "get_zone_info": false, 00:11:08.477 "zone_management": false, 00:11:08.477 "zone_append": false, 00:11:08.477 "compare": false, 00:11:08.477 "compare_and_write": false, 00:11:08.477 "abort": true, 00:11:08.477 "seek_hole": false, 00:11:08.477 "seek_data": false, 00:11:08.477 "copy": true, 00:11:08.477 "nvme_iov_md": false 00:11:08.477 }, 00:11:08.477 "memory_domains": [ 00:11:08.477 { 00:11:08.477 "dma_device_id": "system", 00:11:08.477 "dma_device_type": 1 00:11:08.477 }, 00:11:08.477 { 00:11:08.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.477 "dma_device_type": 2 00:11:08.477 } 00:11:08.477 ], 00:11:08.477 "driver_specific": {} 00:11:08.477 } 00:11:08.477 ] 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.477 18:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.737 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:08.737 "name": "Existed_Raid", 00:11:08.737 "uuid": "6f39c6ab-53cf-46e3-aede-6c6e0d00c1cc", 00:11:08.737 "strip_size_kb": 0, 00:11:08.737 "state": "online", 00:11:08.737 "raid_level": "raid1", 00:11:08.737 "superblock": true, 00:11:08.737 "num_base_bdevs": 2, 00:11:08.737 "num_base_bdevs_discovered": 2, 00:11:08.737 "num_base_bdevs_operational": 2, 00:11:08.737 "base_bdevs_list": [ 00:11:08.737 { 00:11:08.737 "name": "BaseBdev1", 00:11:08.737 "uuid": "1ded7847-c62b-4137-b333-f9c4e8662ca6", 00:11:08.737 "is_configured": true, 00:11:08.737 "data_offset": 2048, 00:11:08.737 "data_size": 63488 00:11:08.737 }, 00:11:08.737 { 00:11:08.737 "name": "BaseBdev2", 00:11:08.737 "uuid": "09b62915-dd34-4e98-a181-b6a09b5b9448", 00:11:08.737 "is_configured": true, 00:11:08.737 "data_offset": 2048, 00:11:08.737 "data_size": 63488 00:11:08.737 } 00:11:08.737 ] 00:11:08.737 }' 00:11:08.737 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:08.737 18:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:09.306 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:09.306 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:09.306 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:09.306 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:09.306 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:09.306 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:09.306 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:09.306 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:09.306 [2024-07-24 18:14:17.751679] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.306 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:09.306 "name": "Existed_Raid", 00:11:09.306 "aliases": [ 00:11:09.306 "6f39c6ab-53cf-46e3-aede-6c6e0d00c1cc" 00:11:09.306 ], 00:11:09.306 "product_name": "Raid Volume", 00:11:09.306 "block_size": 512, 00:11:09.306 "num_blocks": 63488, 00:11:09.306 "uuid": "6f39c6ab-53cf-46e3-aede-6c6e0d00c1cc", 00:11:09.306 "assigned_rate_limits": { 00:11:09.306 "rw_ios_per_sec": 0, 00:11:09.306 "rw_mbytes_per_sec": 0, 00:11:09.306 "r_mbytes_per_sec": 0, 00:11:09.306 "w_mbytes_per_sec": 0 00:11:09.306 }, 00:11:09.306 "claimed": false, 00:11:09.306 "zoned": false, 00:11:09.306 "supported_io_types": { 00:11:09.306 "read": true, 00:11:09.306 "write": true, 00:11:09.306 "unmap": false, 00:11:09.306 "flush": false, 00:11:09.306 "reset": true, 00:11:09.306 "nvme_admin": 
false, 00:11:09.306 "nvme_io": false, 00:11:09.306 "nvme_io_md": false, 00:11:09.306 "write_zeroes": true, 00:11:09.306 "zcopy": false, 00:11:09.306 "get_zone_info": false, 00:11:09.306 "zone_management": false, 00:11:09.306 "zone_append": false, 00:11:09.306 "compare": false, 00:11:09.306 "compare_and_write": false, 00:11:09.306 "abort": false, 00:11:09.306 "seek_hole": false, 00:11:09.306 "seek_data": false, 00:11:09.306 "copy": false, 00:11:09.306 "nvme_iov_md": false 00:11:09.306 }, 00:11:09.306 "memory_domains": [ 00:11:09.306 { 00:11:09.306 "dma_device_id": "system", 00:11:09.306 "dma_device_type": 1 00:11:09.306 }, 00:11:09.306 { 00:11:09.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.306 "dma_device_type": 2 00:11:09.306 }, 00:11:09.306 { 00:11:09.306 "dma_device_id": "system", 00:11:09.306 "dma_device_type": 1 00:11:09.306 }, 00:11:09.306 { 00:11:09.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.306 "dma_device_type": 2 00:11:09.306 } 00:11:09.306 ], 00:11:09.306 "driver_specific": { 00:11:09.306 "raid": { 00:11:09.306 "uuid": "6f39c6ab-53cf-46e3-aede-6c6e0d00c1cc", 00:11:09.306 "strip_size_kb": 0, 00:11:09.306 "state": "online", 00:11:09.306 "raid_level": "raid1", 00:11:09.306 "superblock": true, 00:11:09.306 "num_base_bdevs": 2, 00:11:09.306 "num_base_bdevs_discovered": 2, 00:11:09.307 "num_base_bdevs_operational": 2, 00:11:09.307 "base_bdevs_list": [ 00:11:09.307 { 00:11:09.307 "name": "BaseBdev1", 00:11:09.307 "uuid": "1ded7847-c62b-4137-b333-f9c4e8662ca6", 00:11:09.307 "is_configured": true, 00:11:09.307 "data_offset": 2048, 00:11:09.307 "data_size": 63488 00:11:09.307 }, 00:11:09.307 { 00:11:09.307 "name": "BaseBdev2", 00:11:09.307 "uuid": "09b62915-dd34-4e98-a181-b6a09b5b9448", 00:11:09.307 "is_configured": true, 00:11:09.307 "data_offset": 2048, 00:11:09.307 "data_size": 63488 00:11:09.307 } 00:11:09.307 ] 00:11:09.307 } 00:11:09.307 } 00:11:09.307 }' 00:11:09.307 18:14:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.307 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:09.307 BaseBdev2' 00:11:09.307 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:09.307 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:09.307 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:09.566 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:09.567 "name": "BaseBdev1", 00:11:09.567 "aliases": [ 00:11:09.567 "1ded7847-c62b-4137-b333-f9c4e8662ca6" 00:11:09.567 ], 00:11:09.567 "product_name": "Malloc disk", 00:11:09.567 "block_size": 512, 00:11:09.567 "num_blocks": 65536, 00:11:09.567 "uuid": "1ded7847-c62b-4137-b333-f9c4e8662ca6", 00:11:09.567 "assigned_rate_limits": { 00:11:09.567 "rw_ios_per_sec": 0, 00:11:09.567 "rw_mbytes_per_sec": 0, 00:11:09.567 "r_mbytes_per_sec": 0, 00:11:09.567 "w_mbytes_per_sec": 0 00:11:09.567 }, 00:11:09.567 "claimed": true, 00:11:09.567 "claim_type": "exclusive_write", 00:11:09.567 "zoned": false, 00:11:09.567 "supported_io_types": { 00:11:09.567 "read": true, 00:11:09.567 "write": true, 00:11:09.567 "unmap": true, 00:11:09.567 "flush": true, 00:11:09.567 "reset": true, 00:11:09.567 "nvme_admin": false, 00:11:09.567 "nvme_io": false, 00:11:09.567 "nvme_io_md": false, 00:11:09.567 "write_zeroes": true, 00:11:09.567 "zcopy": true, 00:11:09.567 "get_zone_info": false, 00:11:09.567 "zone_management": false, 00:11:09.567 "zone_append": false, 00:11:09.567 "compare": false, 00:11:09.567 "compare_and_write": false, 00:11:09.567 "abort": true, 00:11:09.567 "seek_hole": false, 00:11:09.567 "seek_data": 
false, 00:11:09.567 "copy": true, 00:11:09.567 "nvme_iov_md": false 00:11:09.567 }, 00:11:09.567 "memory_domains": [ 00:11:09.567 { 00:11:09.567 "dma_device_id": "system", 00:11:09.567 "dma_device_type": 1 00:11:09.567 }, 00:11:09.567 { 00:11:09.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.567 "dma_device_type": 2 00:11:09.567 } 00:11:09.567 ], 00:11:09.567 "driver_specific": {} 00:11:09.567 }' 00:11:09.567 18:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:09.567 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:09.567 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:09.567 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:09.567 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:09.567 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:09.567 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:09.827 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:09.827 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:09.827 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:09.827 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:09.827 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:09.827 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:09.827 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev2 00:11:09.827 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:10.087 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:10.087 "name": "BaseBdev2", 00:11:10.087 "aliases": [ 00:11:10.087 "09b62915-dd34-4e98-a181-b6a09b5b9448" 00:11:10.087 ], 00:11:10.087 "product_name": "Malloc disk", 00:11:10.087 "block_size": 512, 00:11:10.087 "num_blocks": 65536, 00:11:10.087 "uuid": "09b62915-dd34-4e98-a181-b6a09b5b9448", 00:11:10.087 "assigned_rate_limits": { 00:11:10.087 "rw_ios_per_sec": 0, 00:11:10.087 "rw_mbytes_per_sec": 0, 00:11:10.087 "r_mbytes_per_sec": 0, 00:11:10.087 "w_mbytes_per_sec": 0 00:11:10.087 }, 00:11:10.087 "claimed": true, 00:11:10.087 "claim_type": "exclusive_write", 00:11:10.087 "zoned": false, 00:11:10.087 "supported_io_types": { 00:11:10.087 "read": true, 00:11:10.087 "write": true, 00:11:10.087 "unmap": true, 00:11:10.087 "flush": true, 00:11:10.087 "reset": true, 00:11:10.087 "nvme_admin": false, 00:11:10.087 "nvme_io": false, 00:11:10.087 "nvme_io_md": false, 00:11:10.087 "write_zeroes": true, 00:11:10.087 "zcopy": true, 00:11:10.087 "get_zone_info": false, 00:11:10.087 "zone_management": false, 00:11:10.087 "zone_append": false, 00:11:10.087 "compare": false, 00:11:10.087 "compare_and_write": false, 00:11:10.087 "abort": true, 00:11:10.087 "seek_hole": false, 00:11:10.087 "seek_data": false, 00:11:10.087 "copy": true, 00:11:10.087 "nvme_iov_md": false 00:11:10.087 }, 00:11:10.087 "memory_domains": [ 00:11:10.087 { 00:11:10.087 "dma_device_id": "system", 00:11:10.087 "dma_device_type": 1 00:11:10.087 }, 00:11:10.087 { 00:11:10.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.087 "dma_device_type": 2 00:11:10.087 } 00:11:10.087 ], 00:11:10.087 "driver_specific": {} 00:11:10.087 }' 00:11:10.087 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:10.087 18:14:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:10.087 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:10.087 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:10.087 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:10.087 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:10.087 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:10.087 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:10.347 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:10.347 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:10.347 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:10.347 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:10.347 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:10.347 [2024-07-24 18:14:18.934583] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:10.606 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:10.606 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:10.606 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:10.607 18:14:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.607 18:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.607 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:10.607 "name": "Existed_Raid", 00:11:10.607 "uuid": "6f39c6ab-53cf-46e3-aede-6c6e0d00c1cc", 00:11:10.607 "strip_size_kb": 0, 00:11:10.607 "state": "online", 00:11:10.607 "raid_level": "raid1", 00:11:10.607 "superblock": true, 00:11:10.607 "num_base_bdevs": 2, 00:11:10.607 "num_base_bdevs_discovered": 1, 00:11:10.607 "num_base_bdevs_operational": 1, 00:11:10.607 "base_bdevs_list": [ 
00:11:10.607 { 00:11:10.607 "name": null, 00:11:10.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.607 "is_configured": false, 00:11:10.607 "data_offset": 2048, 00:11:10.607 "data_size": 63488 00:11:10.607 }, 00:11:10.607 { 00:11:10.607 "name": "BaseBdev2", 00:11:10.607 "uuid": "09b62915-dd34-4e98-a181-b6a09b5b9448", 00:11:10.607 "is_configured": true, 00:11:10.607 "data_offset": 2048, 00:11:10.607 "data_size": 63488 00:11:10.607 } 00:11:10.607 ] 00:11:10.607 }' 00:11:10.607 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:10.607 18:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.175 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:11.175 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:11.175 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:11.175 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.476 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:11.476 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:11.476 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:11.476 [2024-07-24 18:14:19.950012] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:11.476 [2024-07-24 18:14:19.950075] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.476 [2024-07-24 18:14:19.960022] bdev_raid.c: 486:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:11.476 [2024-07-24 18:14:19.960063] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.476 [2024-07-24 18:14:19.960071] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x135a880 name Existed_Raid, state offline 00:11:11.476 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:11.476 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:11.477 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.477 18:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2165398 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2165398 ']' 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2165398 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2165398 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2165398' 00:11:11.736 killing process with pid 2165398 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2165398 00:11:11.736 [2024-07-24 18:14:20.209356] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.736 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2165398 00:11:11.736 [2024-07-24 18:14:20.210153] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.996 18:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:11.996 00:11:11.996 real 0m8.129s 00:11:11.996 user 0m14.366s 00:11:11.996 sys 0m1.565s 00:11:11.996 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.996 18:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.996 ************************************ 00:11:11.996 END TEST raid_state_function_test_sb 00:11:11.996 ************************************ 00:11:11.996 18:14:20 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:11.996 18:14:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:11.996 18:14:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.996 18:14:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.996 ************************************ 00:11:11.996 START TEST raid_superblock_test 00:11:11.996 ************************************ 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:11:11.996 18:14:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2167032 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2167032 /var/tmp/spdk-raid.sock 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:11.996 18:14:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2167032 ']' 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:11.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.996 18:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.996 [2024-07-24 18:14:20.520366] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:11:11.996 [2024-07-24 18:14:20.520409] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167032 ] 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:01.0 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:01.1 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:01.2 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:01.3 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:01.4 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:01.5 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:01.6 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:01.7 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:02.0 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:02.1 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:02.2 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:02.3 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:02.4 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:02.5 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:02.6 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b3:02.7 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:01.0 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:01.1 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:01.2 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:11:11.996 EAL: Requested device 0000:b5:01.3 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:01.4 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:01.5 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:01.6 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:01.7 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:02.0 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:02.1 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:02.2 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:02.3 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:02.4 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:02.5 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:02.6 cannot be used 00:11:11.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:11.996 EAL: Requested device 0000:b5:02.7 cannot be used 00:11:12.256 [2024-07-24 18:14:20.613967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.256 [2024-07-24 18:14:20.687198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.256 [2024-07-24 
18:14:20.736975] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.256 [2024-07-24 18:14:20.736999] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:12.825 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:13.084 malloc1 00:11:13.084 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:13.084 [2024-07-24 18:14:21.656819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:13.084 [2024-07-24 18:14:21.656854] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:11:13.084 [2024-07-24 18:14:21.656867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bc3cb0 00:11:13.084 [2024-07-24 18:14:21.656892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.084 [2024-07-24 18:14:21.658013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.084 [2024-07-24 18:14:21.658035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:13.084 pt1 00:11:13.084 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:13.084 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:13.084 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:13.084 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:13.084 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:13.084 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:13.084 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:13.084 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:13.084 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:13.344 malloc2 00:11:13.344 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:13.603 [2024-07-24 18:14:21.973384] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc2 00:11:13.603 [2024-07-24 18:14:21.973415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.603 [2024-07-24 18:14:21.973425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bc50b0 00:11:13.603 [2024-07-24 18:14:21.973433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.603 [2024-07-24 18:14:21.974476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.603 [2024-07-24 18:14:21.974497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:13.603 pt2 00:11:13.603 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:13.603 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:13.604 18:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:11:13.604 [2024-07-24 18:14:22.129803] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:13.604 [2024-07-24 18:14:22.130603] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:13.604 [2024-07-24 18:14:22.130709] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d679b0 00:11:13.604 [2024-07-24 18:14:22.130718] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:13.604 [2024-07-24 18:14:22.130851] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bbb820 00:11:13.604 [2024-07-24 18:14:22.130944] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d679b0 00:11:13.604 [2024-07-24 18:14:22.130950] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d679b0 00:11:13.604 [2024-07-24 
18:14:22.131010] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.604 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.864 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:13.864 "name": "raid_bdev1", 00:11:13.864 "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38", 00:11:13.864 "strip_size_kb": 0, 00:11:13.864 "state": "online", 00:11:13.864 "raid_level": "raid1", 00:11:13.864 "superblock": true, 00:11:13.864 "num_base_bdevs": 2, 00:11:13.864 "num_base_bdevs_discovered": 2, 00:11:13.864 "num_base_bdevs_operational": 2, 00:11:13.864 "base_bdevs_list": 
[ 00:11:13.864 { 00:11:13.864 "name": "pt1", 00:11:13.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:13.864 "is_configured": true, 00:11:13.864 "data_offset": 2048, 00:11:13.864 "data_size": 63488 00:11:13.864 }, 00:11:13.864 { 00:11:13.864 "name": "pt2", 00:11:13.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.864 "is_configured": true, 00:11:13.864 "data_offset": 2048, 00:11:13.864 "data_size": 63488 00:11:13.864 } 00:11:13.864 ] 00:11:13.864 }' 00:11:13.864 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:13.864 18:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:14.433 [2024-07-24 18:14:22.923988] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:14.433 "name": "raid_bdev1", 00:11:14.433 "aliases": [ 00:11:14.433 "30a78787-5d93-4844-bd6e-7f83d16b1d38" 00:11:14.433 ], 00:11:14.433 "product_name": "Raid 
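The `verify_raid_bdev_state raid_bdev1 online raid1 0 2` call above filters the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and compares the fields against the expected state. As an illustrative sketch (not the test's actual mechanism, which is jq plus bash string comparisons), the same checks can be expressed in Python against the JSON captured in the log:

```python
import json

# Entry jq selects from `bdev_raid_get_bdevs all`, values copied from the log
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

# Checks corresponding to "raid_bdev1 online raid1 0 2"
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid1"
assert raid_bdev_info["strip_size_kb"] == 0       # raid1 uses no strip size
assert raid_bdev_info["num_base_bdevs_discovered"] == 2
assert raid_bdev_info["num_base_bdevs_operational"] == 2
```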
Volume", 00:11:14.433 "block_size": 512, 00:11:14.433 "num_blocks": 63488, 00:11:14.433 "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38", 00:11:14.433 "assigned_rate_limits": { 00:11:14.433 "rw_ios_per_sec": 0, 00:11:14.433 "rw_mbytes_per_sec": 0, 00:11:14.433 "r_mbytes_per_sec": 0, 00:11:14.433 "w_mbytes_per_sec": 0 00:11:14.433 }, 00:11:14.433 "claimed": false, 00:11:14.433 "zoned": false, 00:11:14.433 "supported_io_types": { 00:11:14.433 "read": true, 00:11:14.433 "write": true, 00:11:14.433 "unmap": false, 00:11:14.433 "flush": false, 00:11:14.433 "reset": true, 00:11:14.433 "nvme_admin": false, 00:11:14.433 "nvme_io": false, 00:11:14.433 "nvme_io_md": false, 00:11:14.433 "write_zeroes": true, 00:11:14.433 "zcopy": false, 00:11:14.433 "get_zone_info": false, 00:11:14.433 "zone_management": false, 00:11:14.433 "zone_append": false, 00:11:14.433 "compare": false, 00:11:14.433 "compare_and_write": false, 00:11:14.433 "abort": false, 00:11:14.433 "seek_hole": false, 00:11:14.433 "seek_data": false, 00:11:14.433 "copy": false, 00:11:14.433 "nvme_iov_md": false 00:11:14.433 }, 00:11:14.433 "memory_domains": [ 00:11:14.433 { 00:11:14.433 "dma_device_id": "system", 00:11:14.433 "dma_device_type": 1 00:11:14.433 }, 00:11:14.433 { 00:11:14.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.433 "dma_device_type": 2 00:11:14.433 }, 00:11:14.433 { 00:11:14.433 "dma_device_id": "system", 00:11:14.433 "dma_device_type": 1 00:11:14.433 }, 00:11:14.433 { 00:11:14.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.433 "dma_device_type": 2 00:11:14.433 } 00:11:14.433 ], 00:11:14.433 "driver_specific": { 00:11:14.433 "raid": { 00:11:14.433 "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38", 00:11:14.433 "strip_size_kb": 0, 00:11:14.433 "state": "online", 00:11:14.433 "raid_level": "raid1", 00:11:14.433 "superblock": true, 00:11:14.433 "num_base_bdevs": 2, 00:11:14.433 "num_base_bdevs_discovered": 2, 00:11:14.433 "num_base_bdevs_operational": 2, 00:11:14.433 "base_bdevs_list": [ 
00:11:14.433 { 00:11:14.433 "name": "pt1", 00:11:14.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.433 "is_configured": true, 00:11:14.433 "data_offset": 2048, 00:11:14.433 "data_size": 63488 00:11:14.433 }, 00:11:14.433 { 00:11:14.433 "name": "pt2", 00:11:14.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.433 "is_configured": true, 00:11:14.433 "data_offset": 2048, 00:11:14.433 "data_size": 63488 00:11:14.433 } 00:11:14.433 ] 00:11:14.433 } 00:11:14.433 } 00:11:14.433 }' 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:14.433 pt2' 00:11:14.433 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:14.434 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:14.434 18:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:14.693 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:14.693 "name": "pt1", 00:11:14.693 "aliases": [ 00:11:14.693 "00000000-0000-0000-0000-000000000001" 00:11:14.693 ], 00:11:14.693 "product_name": "passthru", 00:11:14.693 "block_size": 512, 00:11:14.693 "num_blocks": 65536, 00:11:14.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.693 "assigned_rate_limits": { 00:11:14.693 "rw_ios_per_sec": 0, 00:11:14.693 "rw_mbytes_per_sec": 0, 00:11:14.693 "r_mbytes_per_sec": 0, 00:11:14.693 "w_mbytes_per_sec": 0 00:11:14.693 }, 00:11:14.693 "claimed": true, 00:11:14.693 "claim_type": "exclusive_write", 00:11:14.693 "zoned": false, 00:11:14.693 "supported_io_types": { 00:11:14.693 "read": true, 00:11:14.693 "write": true, 00:11:14.693 
"unmap": true, 00:11:14.693 "flush": true, 00:11:14.693 "reset": true, 00:11:14.693 "nvme_admin": false, 00:11:14.693 "nvme_io": false, 00:11:14.693 "nvme_io_md": false, 00:11:14.693 "write_zeroes": true, 00:11:14.693 "zcopy": true, 00:11:14.693 "get_zone_info": false, 00:11:14.693 "zone_management": false, 00:11:14.693 "zone_append": false, 00:11:14.693 "compare": false, 00:11:14.693 "compare_and_write": false, 00:11:14.693 "abort": true, 00:11:14.693 "seek_hole": false, 00:11:14.693 "seek_data": false, 00:11:14.693 "copy": true, 00:11:14.693 "nvme_iov_md": false 00:11:14.693 }, 00:11:14.693 "memory_domains": [ 00:11:14.693 { 00:11:14.693 "dma_device_id": "system", 00:11:14.693 "dma_device_type": 1 00:11:14.693 }, 00:11:14.693 { 00:11:14.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.693 "dma_device_type": 2 00:11:14.693 } 00:11:14.693 ], 00:11:14.693 "driver_specific": { 00:11:14.693 "passthru": { 00:11:14.693 "name": "pt1", 00:11:14.693 "base_bdev_name": "malloc1" 00:11:14.693 } 00:11:14.693 } 00:11:14.693 }' 00:11:14.693 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.693 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.693 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:14.693 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.693 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.952 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:14.952 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.952 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.952 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:14.952 18:14:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.952 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.952 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:14.952 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:14.952 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:14.952 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:15.211 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:15.211 "name": "pt2", 00:11:15.211 "aliases": [ 00:11:15.211 "00000000-0000-0000-0000-000000000002" 00:11:15.211 ], 00:11:15.211 "product_name": "passthru", 00:11:15.211 "block_size": 512, 00:11:15.212 "num_blocks": 65536, 00:11:15.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.212 "assigned_rate_limits": { 00:11:15.212 "rw_ios_per_sec": 0, 00:11:15.212 "rw_mbytes_per_sec": 0, 00:11:15.212 "r_mbytes_per_sec": 0, 00:11:15.212 "w_mbytes_per_sec": 0 00:11:15.212 }, 00:11:15.212 "claimed": true, 00:11:15.212 "claim_type": "exclusive_write", 00:11:15.212 "zoned": false, 00:11:15.212 "supported_io_types": { 00:11:15.212 "read": true, 00:11:15.212 "write": true, 00:11:15.212 "unmap": true, 00:11:15.212 "flush": true, 00:11:15.212 "reset": true, 00:11:15.212 "nvme_admin": false, 00:11:15.212 "nvme_io": false, 00:11:15.212 "nvme_io_md": false, 00:11:15.212 "write_zeroes": true, 00:11:15.212 "zcopy": true, 00:11:15.212 "get_zone_info": false, 00:11:15.212 "zone_management": false, 00:11:15.212 "zone_append": false, 00:11:15.212 "compare": false, 00:11:15.212 "compare_and_write": false, 00:11:15.212 "abort": true, 00:11:15.212 "seek_hole": false, 00:11:15.212 "seek_data": false, 00:11:15.212 "copy": true, 00:11:15.212 "nvme_iov_md": 
false 00:11:15.212 }, 00:11:15.212 "memory_domains": [ 00:11:15.212 { 00:11:15.212 "dma_device_id": "system", 00:11:15.212 "dma_device_type": 1 00:11:15.212 }, 00:11:15.212 { 00:11:15.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.212 "dma_device_type": 2 00:11:15.212 } 00:11:15.212 ], 00:11:15.212 "driver_specific": { 00:11:15.212 "passthru": { 00:11:15.212 "name": "pt2", 00:11:15.212 "base_bdev_name": "malloc2" 00:11:15.212 } 00:11:15.212 } 00:11:15.212 }' 00:11:15.212 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:15.212 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:15.212 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:15.212 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:15.212 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:15.212 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:15.212 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:15.471 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:15.471 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:15.471 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:15.471 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:15.471 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:15.471 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:15.471 18:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:15.731 [2024-07-24 
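Each base bdev is then probed with `jq .block_size`, `.md_size`, `.md_interleave`, and `.dif_type`. For these plain passthru-on-malloc bdevs the metadata fields are absent from the JSON, so jq prints `null` and the `[[ null == null ]]` comparisons pass. A minimal Python equivalent of those comparisons, using the pt2 output shown above (trimmed to the fields the test touches):

```python
import json

# `bdev_get_bdevs -b pt2` output from the log, reduced to the compared fields;
# md_size / md_interleave / dif_type are absent, which jq renders as null
pt2_info = json.loads("""
{
  "name": "pt2",
  "product_name": "passthru",
  "block_size": 512,
  "num_blocks": 65536,
  "uuid": "00000000-0000-0000-0000-000000000002",
  "driver_specific": {"passthru": {"name": "pt2", "base_bdev_name": "malloc2"}}
}
""")

assert pt2_info["block_size"] == 512           # [[ 512 == 512 ]]
assert pt2_info.get("md_size") is None         # [[ null == null ]]
assert pt2_info.get("md_interleave") is None   # [[ null == null ]]
assert pt2_info.get("dif_type") is None        # [[ null == null ]]
```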
18:14:24.070916] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.731 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=30a78787-5d93-4844-bd6e-7f83d16b1d38 00:11:15.731 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 30a78787-5d93-4844-bd6e-7f83d16b1d38 ']' 00:11:15.731 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:15.731 [2024-07-24 18:14:24.239203] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.731 [2024-07-24 18:14:24.239216] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.731 [2024-07-24 18:14:24.239256] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.731 [2024-07-24 18:14:24.239295] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.731 [2024-07-24 18:14:24.239302] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d679b0 name raid_bdev1, state offline 00:11:15.731 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:15.731 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.990 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:15.990 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:15.990 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:15.990 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:15.990 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:15.990 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:16.249 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:16.249 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:16.509 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:16.509 18:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:16.509 18:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:16.509 18:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:16.509 18:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:16.509 18:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.509 18:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:16.509 18:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.510 18:14:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:16.510 18:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.510 18:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:16.510 18:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:16.510 18:14:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:16.510 [2024-07-24 18:14:25.085369] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:16.510 [2024-07-24 18:14:25.086343] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:16.510 [2024-07-24 18:14:25.086384] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:16.510 [2024-07-24 18:14:25.086413] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:16.510 [2024-07-24 18:14:25.086426] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.510 [2024-07-24 18:14:25.086449] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d67730 name raid_bdev1, state configuring 00:11:16.510 request: 00:11:16.510 { 00:11:16.510 "name": "raid_bdev1", 00:11:16.510 "raid_level": "raid1", 00:11:16.510 "base_bdevs": [ 00:11:16.510 "malloc1", 00:11:16.510 "malloc2" 00:11:16.510 ], 00:11:16.510 "superblock": false, 00:11:16.510 "method": "bdev_raid_create", 00:11:16.510 "req_id": 1 00:11:16.510 } 
00:11:16.510 Got JSON-RPC error response 00:11:16.510 response: 00:11:16.510 { 00:11:16.510 "code": -17, 00:11:16.510 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:16.510 } 00:11:16.510 18:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:16.510 18:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:16.510 18:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:16.510 18:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:16.770 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.770 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:16.770 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:16.770 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:16.770 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:17.028 [2024-07-24 18:14:25.414186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:17.028 [2024-07-24 18:14:25.414211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.028 [2024-07-24 18:14:25.414225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bc3ee0 00:11:17.028 [2024-07-24 18:14:25.414249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.028 [2024-07-24 18:14:25.415412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.028 [2024-07-24 18:14:25.415433] vbdev_passthru.c: 
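The `NOT` wrapper expects this `bdev_raid_create` call to fail: the malloc bdevs still carry the superblock of the deleted raid_bdev1, so the RPC returns code `-17` (`File exists`), `rpc.py` exits non-zero, and the helper records `es=1`. A sketch of checking that error shape, with the response body copied from the log:

```python
import json

# JSON-RPC error response returned by bdev_raid_create, as logged above
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# An error response makes rpc.py exit non-zero; the NOT helper treats
# that expected failure as a pass for this negative test
assert response["code"] == -17
assert "File exists" in response["message"]
```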
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:17.028 [2024-07-24 18:14:25.415477] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:17.028 [2024-07-24 18:14:25.415494] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:17.028 pt1 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:17.028 "name": "raid_bdev1", 00:11:17.028 "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38", 00:11:17.028 "strip_size_kb": 0, 
00:11:17.028 "state": "configuring", 00:11:17.028 "raid_level": "raid1", 00:11:17.028 "superblock": true, 00:11:17.028 "num_base_bdevs": 2, 00:11:17.028 "num_base_bdevs_discovered": 1, 00:11:17.028 "num_base_bdevs_operational": 2, 00:11:17.028 "base_bdevs_list": [ 00:11:17.028 { 00:11:17.028 "name": "pt1", 00:11:17.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.028 "is_configured": true, 00:11:17.028 "data_offset": 2048, 00:11:17.028 "data_size": 63488 00:11:17.028 }, 00:11:17.028 { 00:11:17.028 "name": null, 00:11:17.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.028 "is_configured": false, 00:11:17.028 "data_offset": 2048, 00:11:17.028 "data_size": 63488 00:11:17.028 } 00:11:17.028 ] 00:11:17.028 }' 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:17.028 18:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.595 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:11:17.595 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:17.595 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:17.595 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.855 [2024-07-24 18:14:26.252349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.855 [2024-07-24 18:14:26.252383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.855 [2024-07-24 18:14:26.252395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d5be30 00:11:17.855 [2024-07-24 18:14:26.252408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.855 [2024-07-24 
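With only pt1 re-created, the raid bdev is reassembled from its on-disk superblock but stays in `configuring`: one of two base bdevs is discovered, and the missing slot appears with `"name": null` and `"is_configured": false`. Expressed in Python against the JSON above (an illustration of what `verify_raid_bdev_state raid_bdev1 configuring raid1 0 2` asserts):

```python
import json

# Partial-assembly state from `bdev_raid_get_bdevs all`, copied from the log
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": false, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

assert raid_bdev_info["state"] == "configuring"
assert raid_bdev_info["num_base_bdevs_discovered"] == 1
configured = [b for b in raid_bdev_info["base_bdevs_list"] if b["is_configured"]]
assert [b["name"] for b in configured] == ["pt1"]
```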
18:14:26.252667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.855 [2024-07-24 18:14:26.252680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.855 [2024-07-24 18:14:26.252723] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:17.855 [2024-07-24 18:14:26.252736] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.855 [2024-07-24 18:14:26.252804] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d5ccd0 00:11:17.855 [2024-07-24 18:14:26.252811] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:17.855 [2024-07-24 18:14:26.252924] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bbdc80 00:11:17.855 [2024-07-24 18:14:26.253011] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d5ccd0 00:11:17.855 [2024-07-24 18:14:26.253018] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d5ccd0 00:11:17.855 [2024-07-24 18:14:26.253083] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.855 pt2 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:17.855 "name": "raid_bdev1", 00:11:17.855 "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38", 00:11:17.855 "strip_size_kb": 0, 00:11:17.855 "state": "online", 00:11:17.855 "raid_level": "raid1", 00:11:17.855 "superblock": true, 00:11:17.855 "num_base_bdevs": 2, 00:11:17.855 "num_base_bdevs_discovered": 2, 00:11:17.855 "num_base_bdevs_operational": 2, 00:11:17.855 "base_bdevs_list": [ 00:11:17.855 { 00:11:17.855 "name": "pt1", 00:11:17.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.855 "is_configured": true, 00:11:17.855 "data_offset": 2048, 00:11:17.855 "data_size": 63488 00:11:17.855 }, 00:11:17.855 { 00:11:17.855 "name": "pt2", 00:11:17.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.855 "is_configured": true, 00:11:17.855 "data_offset": 2048, 00:11:17.855 "data_size": 63488 00:11:17.855 } 00:11:17.855 ] 00:11:17.855 }' 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:17.855 18:14:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.423 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:18.423 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:18.423 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:18.423 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:18.423 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:18.423 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:18.423 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:18.423 18:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:18.683 [2024-07-24 18:14:27.082645] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.683 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:18.683 "name": "raid_bdev1", 00:11:18.683 "aliases": [ 00:11:18.683 "30a78787-5d93-4844-bd6e-7f83d16b1d38" 00:11:18.683 ], 00:11:18.683 "product_name": "Raid Volume", 00:11:18.683 "block_size": 512, 00:11:18.683 "num_blocks": 63488, 00:11:18.683 "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38", 00:11:18.683 "assigned_rate_limits": { 00:11:18.683 "rw_ios_per_sec": 0, 00:11:18.683 "rw_mbytes_per_sec": 0, 00:11:18.683 "r_mbytes_per_sec": 0, 00:11:18.683 "w_mbytes_per_sec": 0 00:11:18.683 }, 00:11:18.683 "claimed": false, 00:11:18.683 "zoned": false, 00:11:18.683 "supported_io_types": { 00:11:18.683 "read": true, 00:11:18.683 "write": true, 00:11:18.683 "unmap": false, 00:11:18.683 "flush": false, 00:11:18.683 "reset": true, 00:11:18.683 "nvme_admin": false, 00:11:18.683 "nvme_io": 
false, 00:11:18.683 "nvme_io_md": false, 00:11:18.683 "write_zeroes": true, 00:11:18.683 "zcopy": false, 00:11:18.683 "get_zone_info": false, 00:11:18.683 "zone_management": false, 00:11:18.683 "zone_append": false, 00:11:18.683 "compare": false, 00:11:18.683 "compare_and_write": false, 00:11:18.683 "abort": false, 00:11:18.683 "seek_hole": false, 00:11:18.683 "seek_data": false, 00:11:18.683 "copy": false, 00:11:18.683 "nvme_iov_md": false 00:11:18.683 }, 00:11:18.683 "memory_domains": [ 00:11:18.683 { 00:11:18.683 "dma_device_id": "system", 00:11:18.683 "dma_device_type": 1 00:11:18.683 }, 00:11:18.683 { 00:11:18.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.683 "dma_device_type": 2 00:11:18.683 }, 00:11:18.683 { 00:11:18.683 "dma_device_id": "system", 00:11:18.683 "dma_device_type": 1 00:11:18.683 }, 00:11:18.683 { 00:11:18.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.683 "dma_device_type": 2 00:11:18.683 } 00:11:18.683 ], 00:11:18.683 "driver_specific": { 00:11:18.683 "raid": { 00:11:18.683 "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38", 00:11:18.683 "strip_size_kb": 0, 00:11:18.683 "state": "online", 00:11:18.683 "raid_level": "raid1", 00:11:18.683 "superblock": true, 00:11:18.683 "num_base_bdevs": 2, 00:11:18.683 "num_base_bdevs_discovered": 2, 00:11:18.683 "num_base_bdevs_operational": 2, 00:11:18.683 "base_bdevs_list": [ 00:11:18.683 { 00:11:18.683 "name": "pt1", 00:11:18.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.683 "is_configured": true, 00:11:18.683 "data_offset": 2048, 00:11:18.683 "data_size": 63488 00:11:18.683 }, 00:11:18.683 { 00:11:18.683 "name": "pt2", 00:11:18.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.683 "is_configured": true, 00:11:18.683 "data_offset": 2048, 00:11:18.683 "data_size": 63488 00:11:18.683 } 00:11:18.683 ] 00:11:18.683 } 00:11:18.683 } 00:11:18.683 }' 00:11:18.683 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.683 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:18.683 pt2' 00:11:18.683 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:18.683 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:18.683 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:18.942 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:18.942 "name": "pt1", 00:11:18.942 "aliases": [ 00:11:18.942 "00000000-0000-0000-0000-000000000001" 00:11:18.942 ], 00:11:18.942 "product_name": "passthru", 00:11:18.942 "block_size": 512, 00:11:18.942 "num_blocks": 65536, 00:11:18.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.942 "assigned_rate_limits": { 00:11:18.942 "rw_ios_per_sec": 0, 00:11:18.942 "rw_mbytes_per_sec": 0, 00:11:18.942 "r_mbytes_per_sec": 0, 00:11:18.942 "w_mbytes_per_sec": 0 00:11:18.942 }, 00:11:18.942 "claimed": true, 00:11:18.942 "claim_type": "exclusive_write", 00:11:18.942 "zoned": false, 00:11:18.942 "supported_io_types": { 00:11:18.942 "read": true, 00:11:18.942 "write": true, 00:11:18.942 "unmap": true, 00:11:18.942 "flush": true, 00:11:18.942 "reset": true, 00:11:18.942 "nvme_admin": false, 00:11:18.942 "nvme_io": false, 00:11:18.942 "nvme_io_md": false, 00:11:18.942 "write_zeroes": true, 00:11:18.942 "zcopy": true, 00:11:18.942 "get_zone_info": false, 00:11:18.942 "zone_management": false, 00:11:18.942 "zone_append": false, 00:11:18.942 "compare": false, 00:11:18.942 "compare_and_write": false, 00:11:18.942 "abort": true, 00:11:18.942 "seek_hole": false, 00:11:18.942 "seek_data": false, 00:11:18.942 "copy": true, 00:11:18.942 "nvme_iov_md": false 00:11:18.942 }, 00:11:18.942 
"memory_domains": [ 00:11:18.942 { 00:11:18.942 "dma_device_id": "system", 00:11:18.942 "dma_device_type": 1 00:11:18.942 }, 00:11:18.942 { 00:11:18.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.942 "dma_device_type": 2 00:11:18.942 } 00:11:18.942 ], 00:11:18.942 "driver_specific": { 00:11:18.942 "passthru": { 00:11:18.942 "name": "pt1", 00:11:18.942 "base_bdev_name": "malloc1" 00:11:18.942 } 00:11:18.942 } 00:11:18.942 }' 00:11:18.942 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:18.942 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:18.942 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:18.942 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:18.942 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:18.942 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:18.942 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:18.942 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:19.201 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:19.201 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:19.201 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:19.201 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:19.201 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:19.201 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:19.201 18:14:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:19.460 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:19.460 "name": "pt2", 00:11:19.460 "aliases": [ 00:11:19.460 "00000000-0000-0000-0000-000000000002" 00:11:19.460 ], 00:11:19.460 "product_name": "passthru", 00:11:19.460 "block_size": 512, 00:11:19.460 "num_blocks": 65536, 00:11:19.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.460 "assigned_rate_limits": { 00:11:19.460 "rw_ios_per_sec": 0, 00:11:19.460 "rw_mbytes_per_sec": 0, 00:11:19.460 "r_mbytes_per_sec": 0, 00:11:19.460 "w_mbytes_per_sec": 0 00:11:19.460 }, 00:11:19.460 "claimed": true, 00:11:19.460 "claim_type": "exclusive_write", 00:11:19.460 "zoned": false, 00:11:19.460 "supported_io_types": { 00:11:19.460 "read": true, 00:11:19.460 "write": true, 00:11:19.460 "unmap": true, 00:11:19.460 "flush": true, 00:11:19.460 "reset": true, 00:11:19.460 "nvme_admin": false, 00:11:19.460 "nvme_io": false, 00:11:19.460 "nvme_io_md": false, 00:11:19.460 "write_zeroes": true, 00:11:19.460 "zcopy": true, 00:11:19.460 "get_zone_info": false, 00:11:19.460 "zone_management": false, 00:11:19.460 "zone_append": false, 00:11:19.460 "compare": false, 00:11:19.460 "compare_and_write": false, 00:11:19.460 "abort": true, 00:11:19.460 "seek_hole": false, 00:11:19.460 "seek_data": false, 00:11:19.460 "copy": true, 00:11:19.460 "nvme_iov_md": false 00:11:19.460 }, 00:11:19.460 "memory_domains": [ 00:11:19.460 { 00:11:19.460 "dma_device_id": "system", 00:11:19.460 "dma_device_type": 1 00:11:19.460 }, 00:11:19.460 { 00:11:19.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.460 "dma_device_type": 2 00:11:19.460 } 00:11:19.460 ], 00:11:19.460 "driver_specific": { 00:11:19.460 "passthru": { 00:11:19.460 "name": "pt2", 00:11:19.460 "base_bdev_name": "malloc2" 00:11:19.460 } 00:11:19.460 } 00:11:19.460 }' 00:11:19.460 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:19.460 18:14:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:19.460 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:19.460 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:19.460 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:19.460 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:19.460 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:19.460 18:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:19.460 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:19.460 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:19.460 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:19.719 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:19.719 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:19.719 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:19.719 [2024-07-24 18:14:28.241631] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.719 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 30a78787-5d93-4844-bd6e-7f83d16b1d38 '!=' 30a78787-5d93-4844-bd6e-7f83d16b1d38 ']' 00:11:19.719 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:11:19.719 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:19.719 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:19.719 18:14:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:19.976 [2024-07-24 18:14:28.413947] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:19.976 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:19.976 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:19.976 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:19.976 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:19.976 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:19.976 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:19.976 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:19.976 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:19.977 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:19.977 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:19.977 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.977 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.235 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:20.235 "name": "raid_bdev1", 00:11:20.235 "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38", 00:11:20.235 "strip_size_kb": 0, 00:11:20.235 "state": "online", 00:11:20.235 "raid_level": 
"raid1", 00:11:20.235 "superblock": true, 00:11:20.235 "num_base_bdevs": 2, 00:11:20.235 "num_base_bdevs_discovered": 1, 00:11:20.235 "num_base_bdevs_operational": 1, 00:11:20.235 "base_bdevs_list": [ 00:11:20.235 { 00:11:20.235 "name": null, 00:11:20.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.235 "is_configured": false, 00:11:20.235 "data_offset": 2048, 00:11:20.235 "data_size": 63488 00:11:20.235 }, 00:11:20.235 { 00:11:20.235 "name": "pt2", 00:11:20.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.235 "is_configured": true, 00:11:20.235 "data_offset": 2048, 00:11:20.235 "data_size": 63488 00:11:20.235 } 00:11:20.235 ] 00:11:20.235 }' 00:11:20.235 18:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:20.235 18:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.493 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:20.752 [2024-07-24 18:14:29.232054] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.752 [2024-07-24 18:14:29.232072] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.752 [2024-07-24 18:14:29.232112] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.752 [2024-07-24 18:14:29.232142] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.752 [2024-07-24 18:14:29.232149] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d5ccd0 name raid_bdev1, state offline 00:11:20.752 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.752 18:14:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:11:21.010 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:21.267 [2024-07-24 18:14:29.749373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:21.268 [2024-07-24 18:14:29.749406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.268 [2024-07-24 18:14:29.749417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bc48a0 00:11:21.268 [2024-07-24 18:14:29.749442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.268 [2024-07-24 18:14:29.750602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.268 [2024-07-24 
18:14:29.750624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:21.268 [2024-07-24 18:14:29.750677] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:21.268 [2024-07-24 18:14:29.750696] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:21.268 [2024-07-24 18:14:29.750756] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bbaac0 00:11:21.268 [2024-07-24 18:14:29.750763] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.268 [2024-07-24 18:14:29.750880] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bbc1c0 00:11:21.268 [2024-07-24 18:14:29.750960] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bbaac0 00:11:21.268 [2024-07-24 18:14:29.750967] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bbaac0 00:11:21.268 [2024-07-24 18:14:29.751030] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.268 pt2 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.268 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.527 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:21.527 "name": "raid_bdev1", 00:11:21.527 "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38", 00:11:21.527 "strip_size_kb": 0, 00:11:21.527 "state": "online", 00:11:21.527 "raid_level": "raid1", 00:11:21.527 "superblock": true, 00:11:21.527 "num_base_bdevs": 2, 00:11:21.527 "num_base_bdevs_discovered": 1, 00:11:21.527 "num_base_bdevs_operational": 1, 00:11:21.527 "base_bdevs_list": [ 00:11:21.527 { 00:11:21.527 "name": null, 00:11:21.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.527 "is_configured": false, 00:11:21.527 "data_offset": 2048, 00:11:21.527 "data_size": 63488 00:11:21.527 }, 00:11:21.527 { 00:11:21.527 "name": "pt2", 00:11:21.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.527 "is_configured": true, 00:11:21.527 "data_offset": 2048, 00:11:21.527 "data_size": 63488 00:11:21.527 } 00:11:21.527 ] 00:11:21.527 }' 00:11:21.527 18:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:21.527 18:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.096 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:22.096 [2024-07-24 18:14:30.587519] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:11:22.096 [2024-07-24 18:14:30.587537] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.096 [2024-07-24 18:14:30.587575] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.096 [2024-07-24 18:14:30.587604] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.096 [2024-07-24 18:14:30.587611] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bbaac0 name raid_bdev1, state offline 00:11:22.096 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:11:22.096 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.354 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:11:22.354 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:11:22.354 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:22.355 [2024-07-24 18:14:30.916357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:22.355 [2024-07-24 18:14:30.916388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.355 [2024-07-24 18:14:30.916399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d66c60 00:11:22.355 [2024-07-24 18:14:30.916424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.355 [2024-07-24 18:14:30.917574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:22.355 [2024-07-24 18:14:30.917595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:22.355 [2024-07-24 18:14:30.917651] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:22.355 [2024-07-24 18:14:30.917670] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:22.355 [2024-07-24 18:14:30.917736] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:22.355 [2024-07-24 18:14:30.917745] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.355 [2024-07-24 18:14:30.917754] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bbbb30 name raid_bdev1, state configuring 00:11:22.355 [2024-07-24 18:14:30.917769] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:22.355 [2024-07-24 18:14:30.917810] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bbd9f0 00:11:22.355 [2024-07-24 18:14:30.917817] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:22.355 [2024-07-24 18:14:30.917931] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bbaa90 00:11:22.355 [2024-07-24 18:14:30.918013] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bbd9f0 00:11:22.355 [2024-07-24 18:14:30.918020] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bbd9f0 00:11:22.355 [2024-07-24 18:14:30.918084] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.355 pt1 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:22.355 18:14:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.355 18:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.613 18:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:22.613 "name": "raid_bdev1", 00:11:22.613 "uuid": "30a78787-5d93-4844-bd6e-7f83d16b1d38", 00:11:22.613 "strip_size_kb": 0, 00:11:22.613 "state": "online", 00:11:22.613 "raid_level": "raid1", 00:11:22.614 "superblock": true, 00:11:22.614 "num_base_bdevs": 2, 00:11:22.614 "num_base_bdevs_discovered": 1, 00:11:22.614 "num_base_bdevs_operational": 1, 00:11:22.614 "base_bdevs_list": [ 00:11:22.614 { 00:11:22.614 "name": null, 00:11:22.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.614 "is_configured": false, 00:11:22.614 "data_offset": 2048, 00:11:22.614 "data_size": 63488 00:11:22.614 }, 00:11:22.614 { 
00:11:22.614 "name": "pt2", 00:11:22.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.614 "is_configured": true, 00:11:22.614 "data_offset": 2048, 00:11:22.614 "data_size": 63488 00:11:22.614 } 00:11:22.614 ] 00:11:22.614 }' 00:11:22.614 18:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:22.614 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.181 18:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:11:23.181 18:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:23.181 18:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:11:23.181 18:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:23.181 18:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:11:23.440 [2024-07-24 18:14:31.870961] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 30a78787-5d93-4844-bd6e-7f83d16b1d38 '!=' 30a78787-5d93-4844-bd6e-7f83d16b1d38 ']' 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2167032 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2167032 ']' 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2167032 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2167032 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2167032' 00:11:23.440 killing process with pid 2167032 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2167032 00:11:23.440 [2024-07-24 18:14:31.941498] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.440 [2024-07-24 18:14:31.941537] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.440 [2024-07-24 18:14:31.941567] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.440 [2024-07-24 18:14:31.941574] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bbd9f0 name raid_bdev1, state offline 00:11:23.440 18:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2167032 00:11:23.440 [2024-07-24 18:14:31.956371] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.701 18:14:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:23.701 00:11:23.701 real 0m11.662s 00:11:23.701 user 0m21.038s 00:11:23.701 sys 0m2.261s 00:11:23.701 18:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.701 18:14:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.701 ************************************ 00:11:23.701 END TEST raid_superblock_test 00:11:23.701 ************************************ 00:11:23.701 18:14:32 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test 
raid_io_error_test raid1 2 read 00:11:23.701 18:14:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:23.701 18:14:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.701 18:14:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.701 ************************************ 00:11:23.701 START TEST raid_read_error_test 00:11:23.701 ************************************ 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
raid_bdev_name=raid_bdev1 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.churfZWcj4 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2169352 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2169352 /var/tmp/spdk-raid.sock 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2169352 ']' 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:23.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.701 18:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.701 [2024-07-24 18:14:32.284475] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:11:23.701 [2024-07-24 18:14:32.284522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169352 ] 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:01.0 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:01.1 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:01.2 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:01.3 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:01.4 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:01.5 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:01.6 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:01.7 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:02.0 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested 
device 0000:b3:02.1 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:02.2 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:02.3 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:02.4 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:02.5 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:02.6 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b3:02.7 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:01.0 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:01.1 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:01.2 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:01.3 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:01.4 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:01.5 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:01.6 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:01.7 
cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:02.0 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:02.1 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:02.2 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:02.3 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:02.4 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:02.5 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:02.6 cannot be used 00:11:23.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:23.961 EAL: Requested device 0000:b5:02.7 cannot be used 00:11:23.961 [2024-07-24 18:14:32.377396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.961 [2024-07-24 18:14:32.451566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.961 [2024-07-24 18:14:32.511189] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.961 [2024-07-24 18:14:32.511211] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.529 18:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.529 18:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:24.529 18:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:24.529 18:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:24.788 BaseBdev1_malloc 00:11:24.788 18:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:25.046 true 00:11:25.046 18:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:25.046 [2024-07-24 18:14:33.560017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:25.046 [2024-07-24 18:14:33.560050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.047 [2024-07-24 18:14:33.560062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x290bed0 00:11:25.047 [2024-07-24 18:14:33.560070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.047 [2024-07-24 18:14:33.561230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.047 [2024-07-24 18:14:33.561253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:25.047 BaseBdev1 00:11:25.047 18:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:25.047 18:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:25.305 BaseBdev2_malloc 00:11:25.305 18:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:25.606 true 00:11:25.606 18:14:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:25.606 [2024-07-24 18:14:34.056965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:25.606 [2024-07-24 18:14:34.056995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.606 [2024-07-24 18:14:34.057007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2910b60 00:11:25.606 [2024-07-24 18:14:34.057015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.606 [2024-07-24 18:14:34.057999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.606 [2024-07-24 18:14:34.058020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:25.606 BaseBdev2 00:11:25.606 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:25.883 [2024-07-24 18:14:34.225420] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.883 [2024-07-24 18:14:34.226269] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.883 [2024-07-24 18:14:34.226403] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2912790 00:11:25.883 [2024-07-24 18:14:34.226412] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:25.883 [2024-07-24 18:14:34.226538] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2766f00 00:11:25.883 [2024-07-24 18:14:34.226647] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2912790 00:11:25.883 [2024-07-24 18:14:34.226669] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2912790 00:11:25.883 [2024-07-24 18:14:34.226737] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.883 "name": "raid_bdev1", 00:11:25.883 "uuid": "15db03f2-4a90-4ad8-8e4b-4d424bedb595", 00:11:25.883 "strip_size_kb": 0, 00:11:25.883 "state": "online", 00:11:25.883 "raid_level": "raid1", 00:11:25.883 "superblock": true, 00:11:25.883 
"num_base_bdevs": 2, 00:11:25.883 "num_base_bdevs_discovered": 2, 00:11:25.883 "num_base_bdevs_operational": 2, 00:11:25.883 "base_bdevs_list": [ 00:11:25.883 { 00:11:25.883 "name": "BaseBdev1", 00:11:25.883 "uuid": "4089dd7b-c812-5b33-aeab-3c46d593ea49", 00:11:25.883 "is_configured": true, 00:11:25.883 "data_offset": 2048, 00:11:25.883 "data_size": 63488 00:11:25.883 }, 00:11:25.883 { 00:11:25.883 "name": "BaseBdev2", 00:11:25.883 "uuid": "c9c201cc-792c-5701-b328-da9f36a820e7", 00:11:25.883 "is_configured": true, 00:11:25.883 "data_offset": 2048, 00:11:25.883 "data_size": 63488 00:11:25.883 } 00:11:25.883 ] 00:11:25.883 }' 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.883 18:14:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.451 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:26.451 18:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:26.451 [2024-07-24 18:14:34.963543] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x290d710 00:11:27.389 18:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:27.648 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:27.648 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:27.648 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:11:27.648 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:27.648 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:27.648 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:27.648 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:27.648 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:27.648 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:27.649 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:27.649 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:27.649 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:27.649 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:27.649 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:27.649 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.649 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.908 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.908 "name": "raid_bdev1", 00:11:27.908 "uuid": "15db03f2-4a90-4ad8-8e4b-4d424bedb595", 00:11:27.908 "strip_size_kb": 0, 00:11:27.908 "state": "online", 00:11:27.908 "raid_level": "raid1", 00:11:27.908 "superblock": true, 00:11:27.908 "num_base_bdevs": 2, 00:11:27.908 "num_base_bdevs_discovered": 2, 00:11:27.908 "num_base_bdevs_operational": 2, 00:11:27.908 "base_bdevs_list": [ 00:11:27.908 { 00:11:27.908 "name": "BaseBdev1", 00:11:27.908 "uuid": "4089dd7b-c812-5b33-aeab-3c46d593ea49", 00:11:27.908 "is_configured": true, 00:11:27.908 
"data_offset": 2048, 00:11:27.908 "data_size": 63488 00:11:27.908 }, 00:11:27.908 { 00:11:27.908 "name": "BaseBdev2", 00:11:27.908 "uuid": "c9c201cc-792c-5701-b328-da9f36a820e7", 00:11:27.908 "is_configured": true, 00:11:27.908 "data_offset": 2048, 00:11:27.908 "data_size": 63488 00:11:27.908 } 00:11:27.908 ] 00:11:27.908 }' 00:11:27.908 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.908 18:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.167 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:28.427 [2024-07-24 18:14:36.895245] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.427 [2024-07-24 18:14:36.895275] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.427 [2024-07-24 18:14:36.897361] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.427 [2024-07-24 18:14:36.897384] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.427 [2024-07-24 18:14:36.897436] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.427 [2024-07-24 18:14:36.897444] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2912790 name raid_bdev1, state offline 00:11:28.427 0 00:11:28.427 18:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2169352 00:11:28.427 18:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2169352 ']' 00:11:28.427 18:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2169352 00:11:28.427 18:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:28.427 18:14:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.427 18:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2169352 00:11:28.427 18:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.428 18:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.428 18:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2169352' 00:11:28.428 killing process with pid 2169352 00:11:28.428 18:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2169352 00:11:28.428 [2024-07-24 18:14:36.970707] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.428 18:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2169352 00:11:28.428 [2024-07-24 18:14:36.980369] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.688 18:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.churfZWcj4 00:11:28.688 18:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:28.688 18:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:28.688 18:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:28.688 18:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:28.688 18:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:28.688 18:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:28.688 18:14:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:28.688 00:11:28.688 real 0m4.950s 00:11:28.688 user 0m7.427s 00:11:28.688 sys 0m0.882s 00:11:28.688 18:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:11:28.688 18:14:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.688 ************************************ 00:11:28.688 END TEST raid_read_error_test 00:11:28.688 ************************************ 00:11:28.688 18:14:37 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:28.688 18:14:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:28.688 18:14:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.688 18:14:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.688 ************************************ 00:11:28.688 START TEST raid_write_error_test 00:11:28.688 ************************************ 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.sTH4ePEdH6 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2170390 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2170390 /var/tmp/spdk-raid.sock 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2170390 ']' 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:28.688 18:14:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:28.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.688 18:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.949 [2024-07-24 18:14:37.316251] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:11:28.949 [2024-07-24 18:14:37.316300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170390 ] 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:01.0 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:01.1 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:01.2 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:01.3 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:01.4 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:01.5 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:01.6 cannot be used 
00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:01.7 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:02.0 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:02.1 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:02.2 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:02.3 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:02.4 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:02.5 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:02.6 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b3:02.7 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:01.0 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:01.1 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:01.2 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:01.3 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:01.4 cannot be used 00:11:28.949 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:01.5 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:01.6 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:01.7 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:02.0 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:02.1 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:02.2 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:02.3 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:02.4 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:02.5 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:02.6 cannot be used 00:11:28.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.949 EAL: Requested device 0000:b5:02.7 cannot be used 00:11:28.949 [2024-07-24 18:14:37.408873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.949 [2024-07-24 18:14:37.483135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.949 [2024-07-24 18:14:37.536283] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.949 [2024-07-24 18:14:37.536312] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.518 18:14:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.518 18:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:29.518 18:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:29.518 18:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:29.777 BaseBdev1_malloc 00:11:29.777 18:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:30.037 true 00:11:30.037 18:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:30.037 [2024-07-24 18:14:38.581055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:30.037 [2024-07-24 18:14:38.581084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.037 [2024-07-24 18:14:38.581097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d19ed0 00:11:30.037 [2024-07-24 18:14:38.581105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.037 [2024-07-24 18:14:38.582249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.037 [2024-07-24 18:14:38.582271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.037 BaseBdev1 00:11:30.037 18:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:30.037 18:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.296 BaseBdev2_malloc 00:11:30.296 18:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:30.555 true 00:11:30.555 18:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:30.555 [2024-07-24 18:14:39.077893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:30.555 [2024-07-24 18:14:39.077926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.555 [2024-07-24 18:14:39.077940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d1eb60 00:11:30.555 [2024-07-24 18:14:39.077965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.555 [2024-07-24 18:14:39.079017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.555 [2024-07-24 18:14:39.079047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.555 BaseBdev2 00:11:30.555 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:30.814 [2024-07-24 18:14:39.238329] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.814 [2024-07-24 18:14:39.239195] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.814 [2024-07-24 18:14:39.239328] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d20790 
00:11:30.814 [2024-07-24 18:14:39.239337] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:30.814 [2024-07-24 18:14:39.239468] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b74f00 00:11:30.814 [2024-07-24 18:14:39.239574] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d20790 00:11:30.814 [2024-07-24 18:14:39.239581] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d20790 00:11:30.814 [2024-07-24 18:14:39.239656] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.814 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.815 18:14:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.074 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.074 "name": "raid_bdev1", 00:11:31.074 "uuid": "1dc344b7-5cd9-47df-af58-3c6aaec1d908", 00:11:31.074 "strip_size_kb": 0, 00:11:31.074 "state": "online", 00:11:31.074 "raid_level": "raid1", 00:11:31.074 "superblock": true, 00:11:31.074 "num_base_bdevs": 2, 00:11:31.074 "num_base_bdevs_discovered": 2, 00:11:31.074 "num_base_bdevs_operational": 2, 00:11:31.074 "base_bdevs_list": [ 00:11:31.074 { 00:11:31.074 "name": "BaseBdev1", 00:11:31.074 "uuid": "831ad0fd-bf50-54f3-8037-de7bc258b84f", 00:11:31.074 "is_configured": true, 00:11:31.074 "data_offset": 2048, 00:11:31.074 "data_size": 63488 00:11:31.074 }, 00:11:31.074 { 00:11:31.074 "name": "BaseBdev2", 00:11:31.074 "uuid": "22604540-6e1d-5d29-b667-a70103755080", 00:11:31.074 "is_configured": true, 00:11:31.074 "data_offset": 2048, 00:11:31.074 "data_size": 63488 00:11:31.074 } 00:11:31.074 ] 00:11:31.074 }' 00:11:31.074 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.074 18:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.333 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:31.333 18:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:31.592 [2024-07-24 18:14:40.004526] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d1b710 00:11:32.530 18:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:32.530 [2024-07-24 18:14:41.080732] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: 
Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:32.530 [2024-07-24 18:14:41.080784] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.530 [2024-07-24 18:14:41.080957] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1d1b710 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
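The `jq -r '.[] | select(.name == "raid_bdev1")'` filter above plucks a single raid bdev out of the `bdev_raid_get_bdevs all` output before `verify_raid_bdev_state` compares its fields. A minimal Python sketch of that same selection and the degraded-state check follows; the field names are taken from the JSON printed in this log, but the inline sample itself is hypothetical, not live RPC output:

```python
import json

# Hypothetical sample shaped like the `rpc.py bdev_raid_get_bdevs all`
# output shown in this log after the write error is injected.
bdevs = json.loads("""[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
   "num_base_bdevs": 2, "num_base_bdevs_discovered": 1,
   "num_base_bdevs_operational": 1,
   "base_bdevs_list": [
     {"name": null, "is_configured": false},
     {"name": "BaseBdev2", "is_configured": true}
   ]}
]""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# After BaseBdev1 fails a write, raid1 keeps running degraded:
# still online, but only one base bdev discovered/operational.
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_discovered"] == 1
assert info["num_base_bdevs_operational"] == 1
print("raid_bdev1 degraded but online")
```

This mirrors why `expected_num_base_bdevs` is set to 1 for raid1 write failures: the array survives the lost member rather than going offline.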
00:11:32.530 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.789 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.789 "name": "raid_bdev1", 00:11:32.789 "uuid": "1dc344b7-5cd9-47df-af58-3c6aaec1d908", 00:11:32.789 "strip_size_kb": 0, 00:11:32.789 "state": "online", 00:11:32.789 "raid_level": "raid1", 00:11:32.789 "superblock": true, 00:11:32.789 "num_base_bdevs": 2, 00:11:32.789 "num_base_bdevs_discovered": 1, 00:11:32.789 "num_base_bdevs_operational": 1, 00:11:32.789 "base_bdevs_list": [ 00:11:32.789 { 00:11:32.789 "name": null, 00:11:32.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.789 "is_configured": false, 00:11:32.789 "data_offset": 2048, 00:11:32.789 "data_size": 63488 00:11:32.789 }, 00:11:32.789 { 00:11:32.789 "name": "BaseBdev2", 00:11:32.789 "uuid": "22604540-6e1d-5d29-b667-a70103755080", 00:11:32.789 "is_configured": true, 00:11:32.789 "data_offset": 2048, 00:11:32.789 "data_size": 63488 00:11:32.789 } 00:11:32.789 ] 00:11:32.789 }' 00:11:32.789 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:32.789 18:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.358 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:33.358 [2024-07-24 18:14:41.929401] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.358 [2024-07-24 18:14:41.929429] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.358 [2024-07-24 18:14:41.931379] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.358 [2024-07-24 18:14:41.931397] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.358 [2024-07-24 18:14:41.931431] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.358 [2024-07-24 18:14:41.931439] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d20790 name raid_bdev1, state offline 00:11:33.358 0 00:11:33.358 18:14:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2170390 00:11:33.358 18:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2170390 ']' 00:11:33.358 18:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2170390 00:11:33.358 18:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:33.617 18:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:33.617 18:14:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2170390 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2170390' 00:11:33.617 killing process with pid 2170390 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2170390 00:11:33.617 [2024-07-24 18:14:42.004747] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2170390 00:11:33.617 [2024-07-24 18:14:42.013083] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.sTH4ePEdH6 
00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:33.617 00:11:33.617 real 0m4.950s 00:11:33.617 user 0m7.447s 00:11:33.617 sys 0m0.856s 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.617 18:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.617 ************************************ 00:11:33.617 END TEST raid_write_error_test 00:11:33.617 ************************************ 00:11:33.876 18:14:42 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:11:33.876 18:14:42 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:33.876 18:14:42 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:33.876 18:14:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:33.876 18:14:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.876 18:14:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.876 ************************************ 00:11:33.876 START TEST raid_state_function_test 00:11:33.876 ************************************ 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:11:33.876 
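The `grep -v Job | grep raid_bdev1 | awk '{print $6}'` pipeline above pulls the failures-per-second column out of the bdevperf summary, and the test passes only when that value is exactly `0.00`. A rough Python equivalent of the extraction; the sample line below is hypothetical, modeled on a bdevperf per-bdev summary row, and the column position is the assumption carried over from `awk '{print $6}'`:

```python
# Hypothetical bdevperf summary row; the check only cares about field 6.
line = "raid_bdev1  12345.67  48.22  0.00  0.00  0.00"

def fail_per_s(summary_line: str) -> str:
    # awk '{print $6}': split on whitespace, take the 6th field (1-based).
    return summary_line.split()[5]

# Mirrors: [[ $fail_per_s = 0.00 ]] -- a string compare, not numeric.
assert fail_per_s(line) == "0.00"
print("no write failures per second reported")
```

A string comparison is deliberate here, matching the shell `[[ 0.00 = \0\.\0\0 ]]` pattern test rather than a float comparison.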
18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:33.876 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@226 -- # local strip_size 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2171292 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2171292' 00:11:33.877 Process raid pid: 2171292 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2171292 /var/tmp/spdk-raid.sock 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2171292 ']' 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:11:33.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.877 18:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.877 [2024-07-24 18:14:42.347903] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:11:33.877 [2024-07-24 18:14:42.347949] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:01.0 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:01.1 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:01.2 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:01.3 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:01.4 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:01.5 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:01.6 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:01.7 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:02.0 cannot be used 00:11:33.877 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:02.1 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:02.2 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:02.3 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:02.4 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:02.5 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:02.6 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b3:02.7 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:01.0 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:01.1 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:01.2 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:01.3 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:01.4 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:01.5 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:01.6 cannot be used 00:11:33.877 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:01.7 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:02.0 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:02.1 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:02.2 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:02.3 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:02.4 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:02.5 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:02.6 cannot be used 00:11:33.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:33.877 EAL: Requested device 0000:b5:02.7 cannot be used 00:11:33.877 [2024-07-24 18:14:42.443410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.136 [2024-07-24 18:14:42.517438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.136 [2024-07-24 18:14:42.577230] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.136 [2024-07-24 18:14:42.577247] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.705 18:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.705 18:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:34.705 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:34.963 [2024-07-24 18:14:43.312184] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.963 [2024-07-24 18:14:43.312214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.963 [2024-07-24 18:14:43.312220] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.963 [2024-07-24 18:14:43.312227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.963 [2024-07-24 18:14:43.312233] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.963 [2024-07-24 18:14:43.312256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.963 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:34.963 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:34.963 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:34.963 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:34.963 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:34.964 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:34.964 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.964 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.964 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:11:34.964 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.964 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.964 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.964 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.964 "name": "Existed_Raid", 00:11:34.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.964 "strip_size_kb": 64, 00:11:34.964 "state": "configuring", 00:11:34.964 "raid_level": "raid0", 00:11:34.964 "superblock": false, 00:11:34.964 "num_base_bdevs": 3, 00:11:34.964 "num_base_bdevs_discovered": 0, 00:11:34.964 "num_base_bdevs_operational": 3, 00:11:34.964 "base_bdevs_list": [ 00:11:34.964 { 00:11:34.964 "name": "BaseBdev1", 00:11:34.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.964 "is_configured": false, 00:11:34.964 "data_offset": 0, 00:11:34.964 "data_size": 0 00:11:34.964 }, 00:11:34.964 { 00:11:34.964 "name": "BaseBdev2", 00:11:34.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.964 "is_configured": false, 00:11:34.964 "data_offset": 0, 00:11:34.964 "data_size": 0 00:11:34.964 }, 00:11:34.964 { 00:11:34.964 "name": "BaseBdev3", 00:11:34.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.964 "is_configured": false, 00:11:34.964 "data_offset": 0, 00:11:34.964 "data_size": 0 00:11:34.964 } 00:11:34.964 ] 00:11:34.964 }' 00:11:34.964 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.964 18:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.530 18:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:35.530 [2024-07-24 18:14:44.114181] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.530 [2024-07-24 18:14:44.114200] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14ab1c0 name Existed_Raid, state configuring 00:11:35.789 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:35.789 [2024-07-24 18:14:44.286632] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.789 [2024-07-24 18:14:44.286650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.789 [2024-07-24 18:14:44.286655] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.789 [2024-07-24 18:14:44.286662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.789 [2024-07-24 18:14:44.286667] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.789 [2024-07-24 18:14:44.286690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.789 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.048 [2024-07-24 18:14:44.463607] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.048 BaseBdev1 00:11:36.048 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:36.048 18:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 
-- # local bdev_name=BaseBdev1 00:11:36.048 18:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:36.048 18:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:36.048 18:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:36.048 18:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:36.048 18:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:36.306 18:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.306 [ 00:11:36.306 { 00:11:36.306 "name": "BaseBdev1", 00:11:36.306 "aliases": [ 00:11:36.306 "fb3b9565-0228-4673-a950-49bc734b94b9" 00:11:36.306 ], 00:11:36.306 "product_name": "Malloc disk", 00:11:36.306 "block_size": 512, 00:11:36.306 "num_blocks": 65536, 00:11:36.307 "uuid": "fb3b9565-0228-4673-a950-49bc734b94b9", 00:11:36.307 "assigned_rate_limits": { 00:11:36.307 "rw_ios_per_sec": 0, 00:11:36.307 "rw_mbytes_per_sec": 0, 00:11:36.307 "r_mbytes_per_sec": 0, 00:11:36.307 "w_mbytes_per_sec": 0 00:11:36.307 }, 00:11:36.307 "claimed": true, 00:11:36.307 "claim_type": "exclusive_write", 00:11:36.307 "zoned": false, 00:11:36.307 "supported_io_types": { 00:11:36.307 "read": true, 00:11:36.307 "write": true, 00:11:36.307 "unmap": true, 00:11:36.307 "flush": true, 00:11:36.307 "reset": true, 00:11:36.307 "nvme_admin": false, 00:11:36.307 "nvme_io": false, 00:11:36.307 "nvme_io_md": false, 00:11:36.307 "write_zeroes": true, 00:11:36.307 "zcopy": true, 00:11:36.307 "get_zone_info": false, 00:11:36.307 "zone_management": false, 00:11:36.307 "zone_append": false, 00:11:36.307 
"compare": false, 00:11:36.307 "compare_and_write": false, 00:11:36.307 "abort": true, 00:11:36.307 "seek_hole": false, 00:11:36.307 "seek_data": false, 00:11:36.307 "copy": true, 00:11:36.307 "nvme_iov_md": false 00:11:36.307 }, 00:11:36.307 "memory_domains": [ 00:11:36.307 { 00:11:36.307 "dma_device_id": "system", 00:11:36.307 "dma_device_type": 1 00:11:36.307 }, 00:11:36.307 { 00:11:36.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.307 "dma_device_type": 2 00:11:36.307 } 00:11:36.307 ], 00:11:36.307 "driver_specific": {} 00:11:36.307 } 00:11:36.307 ] 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.307 18:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.566 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:36.566 "name": "Existed_Raid", 00:11:36.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.566 "strip_size_kb": 64, 00:11:36.566 "state": "configuring", 00:11:36.566 "raid_level": "raid0", 00:11:36.566 "superblock": false, 00:11:36.566 "num_base_bdevs": 3, 00:11:36.566 "num_base_bdevs_discovered": 1, 00:11:36.566 "num_base_bdevs_operational": 3, 00:11:36.566 "base_bdevs_list": [ 00:11:36.566 { 00:11:36.566 "name": "BaseBdev1", 00:11:36.566 "uuid": "fb3b9565-0228-4673-a950-49bc734b94b9", 00:11:36.566 "is_configured": true, 00:11:36.566 "data_offset": 0, 00:11:36.566 "data_size": 65536 00:11:36.566 }, 00:11:36.566 { 00:11:36.566 "name": "BaseBdev2", 00:11:36.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.566 "is_configured": false, 00:11:36.566 "data_offset": 0, 00:11:36.566 "data_size": 0 00:11:36.566 }, 00:11:36.566 { 00:11:36.566 "name": "BaseBdev3", 00:11:36.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.566 "is_configured": false, 00:11:36.566 "data_offset": 0, 00:11:36.566 "data_size": 0 00:11:36.566 } 00:11:36.566 ] 00:11:36.566 }' 00:11:36.566 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:36.566 18:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.134 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:37.134 [2024-07-24 18:14:45.630607] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:37.134 [2024-07-24 18:14:45.630638] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x14aaa90 name Existed_Raid, state configuring 00:11:37.134 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:37.393 [2024-07-24 18:14:45.799062] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.393 [2024-07-24 18:14:45.800115] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:37.393 [2024-07-24 18:14:45.800142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:37.393 [2024-07-24 18:14:45.800149] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:37.393 [2024-07-24 18:14:45.800157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.393 18:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.652 18:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:37.652 "name": "Existed_Raid", 00:11:37.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.652 "strip_size_kb": 64, 00:11:37.652 "state": "configuring", 00:11:37.652 "raid_level": "raid0", 00:11:37.652 "superblock": false, 00:11:37.652 "num_base_bdevs": 3, 00:11:37.652 "num_base_bdevs_discovered": 1, 00:11:37.652 "num_base_bdevs_operational": 3, 00:11:37.652 "base_bdevs_list": [ 00:11:37.652 { 00:11:37.652 "name": "BaseBdev1", 00:11:37.652 "uuid": "fb3b9565-0228-4673-a950-49bc734b94b9", 00:11:37.652 "is_configured": true, 00:11:37.652 "data_offset": 0, 00:11:37.652 "data_size": 65536 00:11:37.652 }, 00:11:37.652 { 00:11:37.652 "name": "BaseBdev2", 00:11:37.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.652 "is_configured": false, 00:11:37.652 "data_offset": 0, 00:11:37.652 "data_size": 0 00:11:37.652 }, 00:11:37.652 { 00:11:37.652 "name": "BaseBdev3", 00:11:37.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.652 "is_configured": false, 00:11:37.652 "data_offset": 0, 00:11:37.652 "data_size": 0 00:11:37.652 } 00:11:37.652 ] 00:11:37.652 }' 00:11:37.652 18:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:37.652 
18:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.220 18:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:38.220 [2024-07-24 18:14:46.668066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.220 BaseBdev2 00:11:38.220 18:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:38.220 18:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:38.220 18:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.220 18:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:38.220 18:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.220 18:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.220 18:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:38.479 18:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.479 [ 00:11:38.479 { 00:11:38.479 "name": "BaseBdev2", 00:11:38.479 "aliases": [ 00:11:38.479 "8c8467e9-752a-449b-89dd-978fe8bf8496" 00:11:38.479 ], 00:11:38.479 "product_name": "Malloc disk", 00:11:38.479 "block_size": 512, 00:11:38.479 "num_blocks": 65536, 00:11:38.479 "uuid": "8c8467e9-752a-449b-89dd-978fe8bf8496", 00:11:38.479 "assigned_rate_limits": { 00:11:38.479 "rw_ios_per_sec": 0, 00:11:38.479 "rw_mbytes_per_sec": 0, 00:11:38.479 
"r_mbytes_per_sec": 0, 00:11:38.479 "w_mbytes_per_sec": 0 00:11:38.479 }, 00:11:38.479 "claimed": true, 00:11:38.479 "claim_type": "exclusive_write", 00:11:38.479 "zoned": false, 00:11:38.479 "supported_io_types": { 00:11:38.479 "read": true, 00:11:38.479 "write": true, 00:11:38.479 "unmap": true, 00:11:38.479 "flush": true, 00:11:38.479 "reset": true, 00:11:38.479 "nvme_admin": false, 00:11:38.479 "nvme_io": false, 00:11:38.479 "nvme_io_md": false, 00:11:38.479 "write_zeroes": true, 00:11:38.479 "zcopy": true, 00:11:38.479 "get_zone_info": false, 00:11:38.479 "zone_management": false, 00:11:38.479 "zone_append": false, 00:11:38.479 "compare": false, 00:11:38.479 "compare_and_write": false, 00:11:38.479 "abort": true, 00:11:38.479 "seek_hole": false, 00:11:38.479 "seek_data": false, 00:11:38.479 "copy": true, 00:11:38.479 "nvme_iov_md": false 00:11:38.479 }, 00:11:38.479 "memory_domains": [ 00:11:38.479 { 00:11:38.479 "dma_device_id": "system", 00:11:38.479 "dma_device_type": 1 00:11:38.479 }, 00:11:38.479 { 00:11:38.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.479 "dma_device_type": 2 00:11:38.479 } 00:11:38.479 ], 00:11:38.479 "driver_specific": {} 00:11:38.479 } 00:11:38.479 ] 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.479 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.738 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:38.738 "name": "Existed_Raid", 00:11:38.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.738 "strip_size_kb": 64, 00:11:38.738 "state": "configuring", 00:11:38.738 "raid_level": "raid0", 00:11:38.738 "superblock": false, 00:11:38.738 "num_base_bdevs": 3, 00:11:38.738 "num_base_bdevs_discovered": 2, 00:11:38.738 "num_base_bdevs_operational": 3, 00:11:38.738 "base_bdevs_list": [ 00:11:38.738 { 00:11:38.738 "name": "BaseBdev1", 00:11:38.738 "uuid": "fb3b9565-0228-4673-a950-49bc734b94b9", 00:11:38.738 "is_configured": true, 00:11:38.738 "data_offset": 0, 00:11:38.738 "data_size": 65536 00:11:38.738 }, 00:11:38.738 { 00:11:38.738 "name": "BaseBdev2", 00:11:38.738 "uuid": "8c8467e9-752a-449b-89dd-978fe8bf8496", 00:11:38.738 "is_configured": true, 00:11:38.738 "data_offset": 0, 00:11:38.738 "data_size": 65536 00:11:38.738 }, 00:11:38.738 { 00:11:38.738 
"name": "BaseBdev3", 00:11:38.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.739 "is_configured": false, 00:11:38.739 "data_offset": 0, 00:11:38.739 "data_size": 0 00:11:38.739 } 00:11:38.739 ] 00:11:38.739 }' 00:11:38.739 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:38.739 18:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.306 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:39.306 [2024-07-24 18:14:47.870000] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.306 [2024-07-24 18:14:47.870026] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14ab980 00:11:39.306 [2024-07-24 18:14:47.870031] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:39.306 [2024-07-24 18:14:47.870152] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14ab650 00:11:39.306 [2024-07-24 18:14:47.870230] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14ab980 00:11:39.306 [2024-07-24 18:14:47.870237] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x14ab980 00:11:39.306 [2024-07-24 18:14:47.870352] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.306 BaseBdev3 00:11:39.306 18:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:39.306 18:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:39.306 18:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.306 18:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 
00:11:39.306 18:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.306 18:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.306 18:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:39.566 18:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:39.825 [ 00:11:39.825 { 00:11:39.825 "name": "BaseBdev3", 00:11:39.825 "aliases": [ 00:11:39.825 "bceaeaf5-1698-4432-bec6-71aa9d2b0453" 00:11:39.825 ], 00:11:39.825 "product_name": "Malloc disk", 00:11:39.825 "block_size": 512, 00:11:39.825 "num_blocks": 65536, 00:11:39.825 "uuid": "bceaeaf5-1698-4432-bec6-71aa9d2b0453", 00:11:39.825 "assigned_rate_limits": { 00:11:39.825 "rw_ios_per_sec": 0, 00:11:39.825 "rw_mbytes_per_sec": 0, 00:11:39.825 "r_mbytes_per_sec": 0, 00:11:39.825 "w_mbytes_per_sec": 0 00:11:39.825 }, 00:11:39.825 "claimed": true, 00:11:39.825 "claim_type": "exclusive_write", 00:11:39.825 "zoned": false, 00:11:39.825 "supported_io_types": { 00:11:39.825 "read": true, 00:11:39.825 "write": true, 00:11:39.825 "unmap": true, 00:11:39.825 "flush": true, 00:11:39.825 "reset": true, 00:11:39.825 "nvme_admin": false, 00:11:39.825 "nvme_io": false, 00:11:39.825 "nvme_io_md": false, 00:11:39.825 "write_zeroes": true, 00:11:39.825 "zcopy": true, 00:11:39.825 "get_zone_info": false, 00:11:39.825 "zone_management": false, 00:11:39.825 "zone_append": false, 00:11:39.825 "compare": false, 00:11:39.825 "compare_and_write": false, 00:11:39.825 "abort": true, 00:11:39.825 "seek_hole": false, 00:11:39.825 "seek_data": false, 00:11:39.825 "copy": true, 00:11:39.825 "nvme_iov_md": false 00:11:39.825 }, 00:11:39.825 
"memory_domains": [ 00:11:39.825 { 00:11:39.825 "dma_device_id": "system", 00:11:39.825 "dma_device_type": 1 00:11:39.825 }, 00:11:39.825 { 00:11:39.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.825 "dma_device_type": 2 00:11:39.825 } 00:11:39.825 ], 00:11:39.825 "driver_specific": {} 00:11:39.825 } 00:11:39.825 ] 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.825 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.826 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.826 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.826 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.826 
18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.826 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.826 "name": "Existed_Raid", 00:11:39.826 "uuid": "fc7480d9-619c-416c-b007-02ebd2cf80b7", 00:11:39.826 "strip_size_kb": 64, 00:11:39.826 "state": "online", 00:11:39.826 "raid_level": "raid0", 00:11:39.826 "superblock": false, 00:11:39.826 "num_base_bdevs": 3, 00:11:39.826 "num_base_bdevs_discovered": 3, 00:11:39.826 "num_base_bdevs_operational": 3, 00:11:39.826 "base_bdevs_list": [ 00:11:39.826 { 00:11:39.826 "name": "BaseBdev1", 00:11:39.826 "uuid": "fb3b9565-0228-4673-a950-49bc734b94b9", 00:11:39.826 "is_configured": true, 00:11:39.826 "data_offset": 0, 00:11:39.826 "data_size": 65536 00:11:39.826 }, 00:11:39.826 { 00:11:39.826 "name": "BaseBdev2", 00:11:39.826 "uuid": "8c8467e9-752a-449b-89dd-978fe8bf8496", 00:11:39.826 "is_configured": true, 00:11:39.826 "data_offset": 0, 00:11:39.826 "data_size": 65536 00:11:39.826 }, 00:11:39.826 { 00:11:39.826 "name": "BaseBdev3", 00:11:39.826 "uuid": "bceaeaf5-1698-4432-bec6-71aa9d2b0453", 00:11:39.826 "is_configured": true, 00:11:39.826 "data_offset": 0, 00:11:39.826 "data_size": 65536 00:11:39.826 } 00:11:39.826 ] 00:11:39.826 }' 00:11:39.826 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.826 18:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.433 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:40.433 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:40.433 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:40.433 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
00:11:40.433 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:40.433 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:40.433 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:40.433 18:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:40.433 [2024-07-24 18:14:49.025194] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.693 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:40.693 "name": "Existed_Raid", 00:11:40.693 "aliases": [ 00:11:40.693 "fc7480d9-619c-416c-b007-02ebd2cf80b7" 00:11:40.693 ], 00:11:40.693 "product_name": "Raid Volume", 00:11:40.693 "block_size": 512, 00:11:40.693 "num_blocks": 196608, 00:11:40.693 "uuid": "fc7480d9-619c-416c-b007-02ebd2cf80b7", 00:11:40.693 "assigned_rate_limits": { 00:11:40.693 "rw_ios_per_sec": 0, 00:11:40.693 "rw_mbytes_per_sec": 0, 00:11:40.693 "r_mbytes_per_sec": 0, 00:11:40.693 "w_mbytes_per_sec": 0 00:11:40.693 }, 00:11:40.693 "claimed": false, 00:11:40.693 "zoned": false, 00:11:40.693 "supported_io_types": { 00:11:40.693 "read": true, 00:11:40.693 "write": true, 00:11:40.693 "unmap": true, 00:11:40.693 "flush": true, 00:11:40.693 "reset": true, 00:11:40.693 "nvme_admin": false, 00:11:40.693 "nvme_io": false, 00:11:40.693 "nvme_io_md": false, 00:11:40.693 "write_zeroes": true, 00:11:40.693 "zcopy": false, 00:11:40.693 "get_zone_info": false, 00:11:40.693 "zone_management": false, 00:11:40.693 "zone_append": false, 00:11:40.693 "compare": false, 00:11:40.693 "compare_and_write": false, 00:11:40.693 "abort": false, 00:11:40.693 "seek_hole": false, 00:11:40.693 "seek_data": false, 00:11:40.693 "copy": false, 00:11:40.693 "nvme_iov_md": false 00:11:40.693 }, 
00:11:40.693 "memory_domains": [ 00:11:40.693 { 00:11:40.693 "dma_device_id": "system", 00:11:40.693 "dma_device_type": 1 00:11:40.693 }, 00:11:40.693 { 00:11:40.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.693 "dma_device_type": 2 00:11:40.693 }, 00:11:40.693 { 00:11:40.693 "dma_device_id": "system", 00:11:40.693 "dma_device_type": 1 00:11:40.693 }, 00:11:40.693 { 00:11:40.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.693 "dma_device_type": 2 00:11:40.693 }, 00:11:40.693 { 00:11:40.693 "dma_device_id": "system", 00:11:40.693 "dma_device_type": 1 00:11:40.693 }, 00:11:40.693 { 00:11:40.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.693 "dma_device_type": 2 00:11:40.693 } 00:11:40.693 ], 00:11:40.693 "driver_specific": { 00:11:40.693 "raid": { 00:11:40.693 "uuid": "fc7480d9-619c-416c-b007-02ebd2cf80b7", 00:11:40.693 "strip_size_kb": 64, 00:11:40.693 "state": "online", 00:11:40.693 "raid_level": "raid0", 00:11:40.693 "superblock": false, 00:11:40.693 "num_base_bdevs": 3, 00:11:40.693 "num_base_bdevs_discovered": 3, 00:11:40.693 "num_base_bdevs_operational": 3, 00:11:40.693 "base_bdevs_list": [ 00:11:40.693 { 00:11:40.693 "name": "BaseBdev1", 00:11:40.693 "uuid": "fb3b9565-0228-4673-a950-49bc734b94b9", 00:11:40.693 "is_configured": true, 00:11:40.693 "data_offset": 0, 00:11:40.693 "data_size": 65536 00:11:40.693 }, 00:11:40.693 { 00:11:40.693 "name": "BaseBdev2", 00:11:40.693 "uuid": "8c8467e9-752a-449b-89dd-978fe8bf8496", 00:11:40.693 "is_configured": true, 00:11:40.693 "data_offset": 0, 00:11:40.693 "data_size": 65536 00:11:40.693 }, 00:11:40.693 { 00:11:40.693 "name": "BaseBdev3", 00:11:40.693 "uuid": "bceaeaf5-1698-4432-bec6-71aa9d2b0453", 00:11:40.693 "is_configured": true, 00:11:40.693 "data_offset": 0, 00:11:40.693 "data_size": 65536 00:11:40.693 } 00:11:40.693 ] 00:11:40.693 } 00:11:40.693 } 00:11:40.693 }' 00:11:40.693 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.693 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:40.693 BaseBdev2 00:11:40.693 BaseBdev3' 00:11:40.693 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:40.693 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:40.693 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:40.693 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:40.693 "name": "BaseBdev1", 00:11:40.693 "aliases": [ 00:11:40.693 "fb3b9565-0228-4673-a950-49bc734b94b9" 00:11:40.693 ], 00:11:40.693 "product_name": "Malloc disk", 00:11:40.693 "block_size": 512, 00:11:40.693 "num_blocks": 65536, 00:11:40.693 "uuid": "fb3b9565-0228-4673-a950-49bc734b94b9", 00:11:40.693 "assigned_rate_limits": { 00:11:40.693 "rw_ios_per_sec": 0, 00:11:40.693 "rw_mbytes_per_sec": 0, 00:11:40.693 "r_mbytes_per_sec": 0, 00:11:40.693 "w_mbytes_per_sec": 0 00:11:40.693 }, 00:11:40.693 "claimed": true, 00:11:40.693 "claim_type": "exclusive_write", 00:11:40.693 "zoned": false, 00:11:40.693 "supported_io_types": { 00:11:40.693 "read": true, 00:11:40.693 "write": true, 00:11:40.693 "unmap": true, 00:11:40.693 "flush": true, 00:11:40.693 "reset": true, 00:11:40.693 "nvme_admin": false, 00:11:40.693 "nvme_io": false, 00:11:40.693 "nvme_io_md": false, 00:11:40.693 "write_zeroes": true, 00:11:40.693 "zcopy": true, 00:11:40.693 "get_zone_info": false, 00:11:40.693 "zone_management": false, 00:11:40.693 "zone_append": false, 00:11:40.693 "compare": false, 00:11:40.693 "compare_and_write": false, 00:11:40.693 "abort": true, 00:11:40.693 "seek_hole": false, 00:11:40.693 "seek_data": false, 00:11:40.693 "copy": 
true, 00:11:40.693 "nvme_iov_md": false 00:11:40.693 }, 00:11:40.693 "memory_domains": [ 00:11:40.693 { 00:11:40.693 "dma_device_id": "system", 00:11:40.693 "dma_device_type": 1 00:11:40.693 }, 00:11:40.693 { 00:11:40.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.693 "dma_device_type": 2 00:11:40.693 } 00:11:40.693 ], 00:11:40.693 "driver_specific": {} 00:11:40.693 }' 00:11:40.693 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.952 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.952 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:40.952 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.952 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.952 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:40.952 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.952 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.952 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:40.952 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.952 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.211 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:41.211 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:41.211 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:41.212 18:14:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:41.212 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:41.212 "name": "BaseBdev2", 00:11:41.212 "aliases": [ 00:11:41.212 "8c8467e9-752a-449b-89dd-978fe8bf8496" 00:11:41.212 ], 00:11:41.212 "product_name": "Malloc disk", 00:11:41.212 "block_size": 512, 00:11:41.212 "num_blocks": 65536, 00:11:41.212 "uuid": "8c8467e9-752a-449b-89dd-978fe8bf8496", 00:11:41.212 "assigned_rate_limits": { 00:11:41.212 "rw_ios_per_sec": 0, 00:11:41.212 "rw_mbytes_per_sec": 0, 00:11:41.212 "r_mbytes_per_sec": 0, 00:11:41.212 "w_mbytes_per_sec": 0 00:11:41.212 }, 00:11:41.212 "claimed": true, 00:11:41.212 "claim_type": "exclusive_write", 00:11:41.212 "zoned": false, 00:11:41.212 "supported_io_types": { 00:11:41.212 "read": true, 00:11:41.212 "write": true, 00:11:41.212 "unmap": true, 00:11:41.212 "flush": true, 00:11:41.212 "reset": true, 00:11:41.212 "nvme_admin": false, 00:11:41.212 "nvme_io": false, 00:11:41.212 "nvme_io_md": false, 00:11:41.212 "write_zeroes": true, 00:11:41.212 "zcopy": true, 00:11:41.212 "get_zone_info": false, 00:11:41.212 "zone_management": false, 00:11:41.212 "zone_append": false, 00:11:41.212 "compare": false, 00:11:41.212 "compare_and_write": false, 00:11:41.212 "abort": true, 00:11:41.212 "seek_hole": false, 00:11:41.212 "seek_data": false, 00:11:41.212 "copy": true, 00:11:41.212 "nvme_iov_md": false 00:11:41.212 }, 00:11:41.212 "memory_domains": [ 00:11:41.212 { 00:11:41.212 "dma_device_id": "system", 00:11:41.212 "dma_device_type": 1 00:11:41.212 }, 00:11:41.212 { 00:11:41.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.212 "dma_device_type": 2 00:11:41.212 } 00:11:41.212 ], 00:11:41.212 "driver_specific": {} 00:11:41.212 }' 00:11:41.212 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:41.212 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:41.471 18:14:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:41.471 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.471 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.471 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:41.471 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.471 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.471 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:41.471 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.471 18:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.471 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:41.471 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:41.471 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:41.471 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:41.731 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:41.731 "name": "BaseBdev3", 00:11:41.731 "aliases": [ 00:11:41.731 "bceaeaf5-1698-4432-bec6-71aa9d2b0453" 00:11:41.731 ], 00:11:41.731 "product_name": "Malloc disk", 00:11:41.731 "block_size": 512, 00:11:41.731 "num_blocks": 65536, 00:11:41.731 "uuid": "bceaeaf5-1698-4432-bec6-71aa9d2b0453", 00:11:41.731 "assigned_rate_limits": { 00:11:41.731 "rw_ios_per_sec": 0, 00:11:41.731 "rw_mbytes_per_sec": 0, 00:11:41.731 "r_mbytes_per_sec": 0, 00:11:41.731 
"w_mbytes_per_sec": 0 00:11:41.731 }, 00:11:41.731 "claimed": true, 00:11:41.731 "claim_type": "exclusive_write", 00:11:41.731 "zoned": false, 00:11:41.731 "supported_io_types": { 00:11:41.731 "read": true, 00:11:41.731 "write": true, 00:11:41.731 "unmap": true, 00:11:41.731 "flush": true, 00:11:41.731 "reset": true, 00:11:41.731 "nvme_admin": false, 00:11:41.731 "nvme_io": false, 00:11:41.731 "nvme_io_md": false, 00:11:41.731 "write_zeroes": true, 00:11:41.731 "zcopy": true, 00:11:41.731 "get_zone_info": false, 00:11:41.731 "zone_management": false, 00:11:41.731 "zone_append": false, 00:11:41.731 "compare": false, 00:11:41.731 "compare_and_write": false, 00:11:41.731 "abort": true, 00:11:41.731 "seek_hole": false, 00:11:41.731 "seek_data": false, 00:11:41.731 "copy": true, 00:11:41.731 "nvme_iov_md": false 00:11:41.731 }, 00:11:41.731 "memory_domains": [ 00:11:41.731 { 00:11:41.731 "dma_device_id": "system", 00:11:41.731 "dma_device_type": 1 00:11:41.731 }, 00:11:41.732 { 00:11:41.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.732 "dma_device_type": 2 00:11:41.732 } 00:11:41.732 ], 00:11:41.732 "driver_specific": {} 00:11:41.732 }' 00:11:41.732 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:41.732 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:41.732 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:41.732 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.732 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.732 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:41.732 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.991 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.991 
18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:41.991 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.991 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.991 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:41.991 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:41.991 [2024-07-24 18:14:50.585069] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.991 [2024-07-24 18:14:50.585089] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.991 [2024-07-24 18:14:50.585117] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.250 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:42.250 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:42.250 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:42.250 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:42.250 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:42.250 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:42.251 18:14:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:42.251 "name": "Existed_Raid", 00:11:42.251 "uuid": "fc7480d9-619c-416c-b007-02ebd2cf80b7", 00:11:42.251 "strip_size_kb": 64, 00:11:42.251 "state": "offline", 00:11:42.251 "raid_level": "raid0", 00:11:42.251 "superblock": false, 00:11:42.251 "num_base_bdevs": 3, 00:11:42.251 "num_base_bdevs_discovered": 2, 00:11:42.251 "num_base_bdevs_operational": 2, 00:11:42.251 "base_bdevs_list": [ 00:11:42.251 { 00:11:42.251 "name": null, 00:11:42.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.251 "is_configured": false, 00:11:42.251 "data_offset": 0, 00:11:42.251 "data_size": 65536 00:11:42.251 }, 00:11:42.251 { 00:11:42.251 "name": "BaseBdev2", 00:11:42.251 "uuid": "8c8467e9-752a-449b-89dd-978fe8bf8496", 00:11:42.251 "is_configured": true, 00:11:42.251 "data_offset": 0, 00:11:42.251 "data_size": 65536 00:11:42.251 }, 00:11:42.251 { 00:11:42.251 "name": "BaseBdev3", 00:11:42.251 "uuid": 
"bceaeaf5-1698-4432-bec6-71aa9d2b0453", 00:11:42.251 "is_configured": true, 00:11:42.251 "data_offset": 0, 00:11:42.251 "data_size": 65536 00:11:42.251 } 00:11:42.251 ] 00:11:42.251 }' 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.251 18:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.819 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:42.819 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:42.819 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.819 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:43.078 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:43.078 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:43.078 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:43.078 [2024-07-24 18:14:51.572424] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:43.078 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:43.078 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:43.078 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.078 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:43.338 18:14:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:43.338 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:43.338 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:43.338 [2024-07-24 18:14:51.911187] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:43.338 [2024-07-24 18:14:51.911217] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14ab980 name Existed_Raid, state offline 00:11:43.597 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:43.597 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:43.597 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.597 18:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:43.597 18:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:43.597 18:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:43.597 18:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:43.597 18:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:43.597 18:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:43.597 18:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:43.856 BaseBdev2 00:11:43.856 18:14:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:43.856 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:43.856 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:43.856 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:43.856 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:43.856 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:43.856 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:43.856 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.116 [ 00:11:44.116 { 00:11:44.116 "name": "BaseBdev2", 00:11:44.116 "aliases": [ 00:11:44.116 "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6" 00:11:44.116 ], 00:11:44.116 "product_name": "Malloc disk", 00:11:44.116 "block_size": 512, 00:11:44.116 "num_blocks": 65536, 00:11:44.116 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:44.116 "assigned_rate_limits": { 00:11:44.116 "rw_ios_per_sec": 0, 00:11:44.116 "rw_mbytes_per_sec": 0, 00:11:44.116 "r_mbytes_per_sec": 0, 00:11:44.116 "w_mbytes_per_sec": 0 00:11:44.116 }, 00:11:44.116 "claimed": false, 00:11:44.116 "zoned": false, 00:11:44.116 "supported_io_types": { 00:11:44.116 "read": true, 00:11:44.116 "write": true, 00:11:44.116 "unmap": true, 00:11:44.116 "flush": true, 00:11:44.116 "reset": true, 00:11:44.116 "nvme_admin": false, 00:11:44.116 "nvme_io": false, 00:11:44.116 "nvme_io_md": false, 00:11:44.116 "write_zeroes": true, 00:11:44.116 "zcopy": true, 
00:11:44.116 "get_zone_info": false, 00:11:44.116 "zone_management": false, 00:11:44.116 "zone_append": false, 00:11:44.116 "compare": false, 00:11:44.116 "compare_and_write": false, 00:11:44.116 "abort": true, 00:11:44.116 "seek_hole": false, 00:11:44.116 "seek_data": false, 00:11:44.116 "copy": true, 00:11:44.116 "nvme_iov_md": false 00:11:44.116 }, 00:11:44.116 "memory_domains": [ 00:11:44.116 { 00:11:44.116 "dma_device_id": "system", 00:11:44.116 "dma_device_type": 1 00:11:44.116 }, 00:11:44.116 { 00:11:44.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.116 "dma_device_type": 2 00:11:44.116 } 00:11:44.116 ], 00:11:44.116 "driver_specific": {} 00:11:44.116 } 00:11:44.116 ] 00:11:44.116 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:44.116 18:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:44.116 18:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:44.116 18:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.376 BaseBdev3 00:11:44.376 18:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:44.376 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:44.376 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:44.376 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:44.376 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:44.376 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:44.376 18:14:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:44.376 18:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:44.635 [ 00:11:44.635 { 00:11:44.635 "name": "BaseBdev3", 00:11:44.635 "aliases": [ 00:11:44.635 "2537cbb0-f47d-45f0-9905-b5bf8418bffa" 00:11:44.635 ], 00:11:44.635 "product_name": "Malloc disk", 00:11:44.635 "block_size": 512, 00:11:44.635 "num_blocks": 65536, 00:11:44.635 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:44.635 "assigned_rate_limits": { 00:11:44.635 "rw_ios_per_sec": 0, 00:11:44.635 "rw_mbytes_per_sec": 0, 00:11:44.635 "r_mbytes_per_sec": 0, 00:11:44.635 "w_mbytes_per_sec": 0 00:11:44.635 }, 00:11:44.635 "claimed": false, 00:11:44.635 "zoned": false, 00:11:44.635 "supported_io_types": { 00:11:44.635 "read": true, 00:11:44.635 "write": true, 00:11:44.635 "unmap": true, 00:11:44.635 "flush": true, 00:11:44.635 "reset": true, 00:11:44.635 "nvme_admin": false, 00:11:44.635 "nvme_io": false, 00:11:44.635 "nvme_io_md": false, 00:11:44.635 "write_zeroes": true, 00:11:44.635 "zcopy": true, 00:11:44.635 "get_zone_info": false, 00:11:44.635 "zone_management": false, 00:11:44.635 "zone_append": false, 00:11:44.635 "compare": false, 00:11:44.635 "compare_and_write": false, 00:11:44.635 "abort": true, 00:11:44.635 "seek_hole": false, 00:11:44.635 "seek_data": false, 00:11:44.635 "copy": true, 00:11:44.635 "nvme_iov_md": false 00:11:44.635 }, 00:11:44.635 "memory_domains": [ 00:11:44.635 { 00:11:44.635 "dma_device_id": "system", 00:11:44.635 "dma_device_type": 1 00:11:44.635 }, 00:11:44.636 { 00:11:44.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.636 "dma_device_type": 2 00:11:44.636 } 00:11:44.636 ], 00:11:44.636 "driver_specific": {} 00:11:44.636 } 00:11:44.636 ] 00:11:44.636 
18:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:44.636 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:44.636 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:44.636 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:44.903 [2024-07-24 18:14:53.264246] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.903 [2024-07-24 18:14:53.264277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.903 [2024-07-24 18:14:53.264293] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.903 [2024-07-24 18:14:53.265252] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.903 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:44.903 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:44.903 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:44.903 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:44.903 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:44.903 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:44.903 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:44.903 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:11:44.904 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:44.904 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:44.904 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.904 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.904 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:44.904 "name": "Existed_Raid", 00:11:44.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.904 "strip_size_kb": 64, 00:11:44.904 "state": "configuring", 00:11:44.904 "raid_level": "raid0", 00:11:44.904 "superblock": false, 00:11:44.904 "num_base_bdevs": 3, 00:11:44.904 "num_base_bdevs_discovered": 2, 00:11:44.904 "num_base_bdevs_operational": 3, 00:11:44.904 "base_bdevs_list": [ 00:11:44.904 { 00:11:44.904 "name": "BaseBdev1", 00:11:44.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.904 "is_configured": false, 00:11:44.904 "data_offset": 0, 00:11:44.904 "data_size": 0 00:11:44.904 }, 00:11:44.904 { 00:11:44.904 "name": "BaseBdev2", 00:11:44.904 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:44.904 "is_configured": true, 00:11:44.904 "data_offset": 0, 00:11:44.904 "data_size": 65536 00:11:44.904 }, 00:11:44.904 { 00:11:44.904 "name": "BaseBdev3", 00:11:44.904 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:44.904 "is_configured": true, 00:11:44.904 "data_offset": 0, 00:11:44.904 "data_size": 65536 00:11:44.904 } 00:11:44.904 ] 00:11:44.904 }' 00:11:44.904 18:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:44.904 18:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.484 18:14:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:45.484 [2024-07-24 18:14:54.054288] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.484 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.744 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:45.744 "name": "Existed_Raid", 00:11:45.744 "uuid": "00000000-0000-0000-0000-000000000000", 
00:11:45.744 "strip_size_kb": 64, 00:11:45.744 "state": "configuring", 00:11:45.744 "raid_level": "raid0", 00:11:45.744 "superblock": false, 00:11:45.744 "num_base_bdevs": 3, 00:11:45.744 "num_base_bdevs_discovered": 1, 00:11:45.744 "num_base_bdevs_operational": 3, 00:11:45.744 "base_bdevs_list": [ 00:11:45.744 { 00:11:45.744 "name": "BaseBdev1", 00:11:45.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.744 "is_configured": false, 00:11:45.744 "data_offset": 0, 00:11:45.744 "data_size": 0 00:11:45.744 }, 00:11:45.744 { 00:11:45.744 "name": null, 00:11:45.744 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:45.744 "is_configured": false, 00:11:45.744 "data_offset": 0, 00:11:45.744 "data_size": 65536 00:11:45.744 }, 00:11:45.744 { 00:11:45.744 "name": "BaseBdev3", 00:11:45.744 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:45.744 "is_configured": true, 00:11:45.744 "data_offset": 0, 00:11:45.744 "data_size": 65536 00:11:45.744 } 00:11:45.744 ] 00:11:45.744 }' 00:11:45.744 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:45.744 18:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.312 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.312 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:46.313 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:46.313 18:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:46.571 [2024-07-24 18:14:55.027598] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.571 
BaseBdev1 00:11:46.571 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:46.571 18:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:46.571 18:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.571 18:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:46.572 18:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.572 18:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:46.572 18:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:46.831 [ 00:11:46.831 { 00:11:46.831 "name": "BaseBdev1", 00:11:46.831 "aliases": [ 00:11:46.831 "b4a13231-7fd0-440a-b6f6-962980252391" 00:11:46.831 ], 00:11:46.831 "product_name": "Malloc disk", 00:11:46.831 "block_size": 512, 00:11:46.831 "num_blocks": 65536, 00:11:46.831 "uuid": "b4a13231-7fd0-440a-b6f6-962980252391", 00:11:46.831 "assigned_rate_limits": { 00:11:46.831 "rw_ios_per_sec": 0, 00:11:46.831 "rw_mbytes_per_sec": 0, 00:11:46.831 "r_mbytes_per_sec": 0, 00:11:46.831 "w_mbytes_per_sec": 0 00:11:46.831 }, 00:11:46.831 "claimed": true, 00:11:46.831 "claim_type": "exclusive_write", 00:11:46.831 "zoned": false, 00:11:46.831 "supported_io_types": { 00:11:46.831 "read": true, 00:11:46.831 "write": true, 00:11:46.831 "unmap": true, 00:11:46.831 "flush": true, 00:11:46.831 "reset": true, 00:11:46.831 "nvme_admin": false, 00:11:46.831 "nvme_io": false, 00:11:46.831 
"nvme_io_md": false, 00:11:46.831 "write_zeroes": true, 00:11:46.831 "zcopy": true, 00:11:46.831 "get_zone_info": false, 00:11:46.831 "zone_management": false, 00:11:46.831 "zone_append": false, 00:11:46.831 "compare": false, 00:11:46.831 "compare_and_write": false, 00:11:46.831 "abort": true, 00:11:46.831 "seek_hole": false, 00:11:46.831 "seek_data": false, 00:11:46.831 "copy": true, 00:11:46.831 "nvme_iov_md": false 00:11:46.831 }, 00:11:46.831 "memory_domains": [ 00:11:46.831 { 00:11:46.831 "dma_device_id": "system", 00:11:46.831 "dma_device_type": 1 00:11:46.831 }, 00:11:46.831 { 00:11:46.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.831 "dma_device_type": 2 00:11:46.831 } 00:11:46.831 ], 00:11:46.831 "driver_specific": {} 00:11:46.831 } 00:11:46.831 ] 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@124 -- # local tmp 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.831 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.090 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:47.090 "name": "Existed_Raid", 00:11:47.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.090 "strip_size_kb": 64, 00:11:47.090 "state": "configuring", 00:11:47.090 "raid_level": "raid0", 00:11:47.090 "superblock": false, 00:11:47.090 "num_base_bdevs": 3, 00:11:47.090 "num_base_bdevs_discovered": 2, 00:11:47.090 "num_base_bdevs_operational": 3, 00:11:47.090 "base_bdevs_list": [ 00:11:47.090 { 00:11:47.090 "name": "BaseBdev1", 00:11:47.090 "uuid": "b4a13231-7fd0-440a-b6f6-962980252391", 00:11:47.090 "is_configured": true, 00:11:47.090 "data_offset": 0, 00:11:47.090 "data_size": 65536 00:11:47.090 }, 00:11:47.090 { 00:11:47.090 "name": null, 00:11:47.090 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:47.090 "is_configured": false, 00:11:47.090 "data_offset": 0, 00:11:47.090 "data_size": 65536 00:11:47.090 }, 00:11:47.090 { 00:11:47.090 "name": "BaseBdev3", 00:11:47.090 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:47.090 "is_configured": true, 00:11:47.090 "data_offset": 0, 00:11:47.090 "data_size": 65536 00:11:47.090 } 00:11:47.090 ] 00:11:47.090 }' 00:11:47.090 18:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:47.090 18:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.659 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.659 
18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:47.659 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:47.659 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:47.918 [2024-07-24 18:14:56.347024] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.918 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:11:48.178 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:48.178 "name": "Existed_Raid", 00:11:48.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.178 "strip_size_kb": 64, 00:11:48.178 "state": "configuring", 00:11:48.178 "raid_level": "raid0", 00:11:48.178 "superblock": false, 00:11:48.178 "num_base_bdevs": 3, 00:11:48.178 "num_base_bdevs_discovered": 1, 00:11:48.178 "num_base_bdevs_operational": 3, 00:11:48.178 "base_bdevs_list": [ 00:11:48.178 { 00:11:48.178 "name": "BaseBdev1", 00:11:48.178 "uuid": "b4a13231-7fd0-440a-b6f6-962980252391", 00:11:48.178 "is_configured": true, 00:11:48.178 "data_offset": 0, 00:11:48.178 "data_size": 65536 00:11:48.178 }, 00:11:48.178 { 00:11:48.178 "name": null, 00:11:48.178 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:48.178 "is_configured": false, 00:11:48.178 "data_offset": 0, 00:11:48.178 "data_size": 65536 00:11:48.178 }, 00:11:48.178 { 00:11:48.178 "name": null, 00:11:48.178 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:48.178 "is_configured": false, 00:11:48.178 "data_offset": 0, 00:11:48.178 "data_size": 65536 00:11:48.178 } 00:11:48.178 ] 00:11:48.178 }' 00:11:48.178 18:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:48.178 18:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.437 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:48.437 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.696 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:48.696 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:48.956 [2024-07-24 18:14:57.337596] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:48.956 "name": "Existed_Raid", 00:11:48.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.956 "strip_size_kb": 64, 
00:11:48.956 "state": "configuring", 00:11:48.956 "raid_level": "raid0", 00:11:48.956 "superblock": false, 00:11:48.956 "num_base_bdevs": 3, 00:11:48.956 "num_base_bdevs_discovered": 2, 00:11:48.956 "num_base_bdevs_operational": 3, 00:11:48.956 "base_bdevs_list": [ 00:11:48.956 { 00:11:48.956 "name": "BaseBdev1", 00:11:48.956 "uuid": "b4a13231-7fd0-440a-b6f6-962980252391", 00:11:48.956 "is_configured": true, 00:11:48.956 "data_offset": 0, 00:11:48.956 "data_size": 65536 00:11:48.956 }, 00:11:48.956 { 00:11:48.956 "name": null, 00:11:48.956 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:48.956 "is_configured": false, 00:11:48.956 "data_offset": 0, 00:11:48.956 "data_size": 65536 00:11:48.956 }, 00:11:48.956 { 00:11:48.956 "name": "BaseBdev3", 00:11:48.956 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:48.956 "is_configured": true, 00:11:48.956 "data_offset": 0, 00:11:48.956 "data_size": 65536 00:11:48.956 } 00:11:48.956 ] 00:11:48.956 }' 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:48.956 18:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.524 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.524 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:49.783 [2024-07-24 18:14:58.332251] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.783 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.042 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.042 "name": "Existed_Raid", 00:11:50.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.042 "strip_size_kb": 64, 00:11:50.042 "state": "configuring", 00:11:50.042 "raid_level": "raid0", 00:11:50.042 "superblock": false, 00:11:50.042 "num_base_bdevs": 3, 00:11:50.042 "num_base_bdevs_discovered": 1, 00:11:50.042 "num_base_bdevs_operational": 3, 00:11:50.042 "base_bdevs_list": [ 00:11:50.042 { 00:11:50.042 "name": null, 00:11:50.042 "uuid": 
"b4a13231-7fd0-440a-b6f6-962980252391", 00:11:50.042 "is_configured": false, 00:11:50.042 "data_offset": 0, 00:11:50.042 "data_size": 65536 00:11:50.042 }, 00:11:50.042 { 00:11:50.042 "name": null, 00:11:50.042 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:50.042 "is_configured": false, 00:11:50.042 "data_offset": 0, 00:11:50.042 "data_size": 65536 00:11:50.042 }, 00:11:50.042 { 00:11:50.042 "name": "BaseBdev3", 00:11:50.042 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:50.042 "is_configured": true, 00:11:50.042 "data_offset": 0, 00:11:50.042 "data_size": 65536 00:11:50.042 } 00:11:50.042 ] 00:11:50.042 }' 00:11:50.042 18:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.042 18:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.610 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.610 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.610 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:50.610 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:50.870 [2024-07-24 18:14:59.352664] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.870 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.129 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:51.129 "name": "Existed_Raid", 00:11:51.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.129 "strip_size_kb": 64, 00:11:51.129 "state": "configuring", 00:11:51.129 "raid_level": "raid0", 00:11:51.129 "superblock": false, 00:11:51.129 "num_base_bdevs": 3, 00:11:51.129 "num_base_bdevs_discovered": 2, 00:11:51.129 "num_base_bdevs_operational": 3, 00:11:51.129 "base_bdevs_list": [ 00:11:51.129 { 00:11:51.129 "name": null, 00:11:51.129 "uuid": "b4a13231-7fd0-440a-b6f6-962980252391", 00:11:51.129 "is_configured": false, 00:11:51.129 "data_offset": 0, 00:11:51.129 "data_size": 65536 00:11:51.129 }, 00:11:51.129 { 00:11:51.129 "name": "BaseBdev2", 00:11:51.129 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:51.129 "is_configured": true, 
00:11:51.129 "data_offset": 0, 00:11:51.129 "data_size": 65536 00:11:51.129 }, 00:11:51.129 { 00:11:51.129 "name": "BaseBdev3", 00:11:51.129 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:51.129 "is_configured": true, 00:11:51.129 "data_offset": 0, 00:11:51.129 "data_size": 65536 00:11:51.129 } 00:11:51.129 ] 00:11:51.129 }' 00:11:51.129 18:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:51.129 18:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.697 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:51.697 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.697 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:51.697 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.697 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:51.956 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b4a13231-7fd0-440a-b6f6-962980252391 00:11:51.956 [2024-07-24 18:15:00.530489] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:51.956 [2024-07-24 18:15:00.530518] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14acce0 00:11:51.956 [2024-07-24 18:15:00.530527] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:51.956 [2024-07-24 18:15:00.530658] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1653420 00:11:51.956 [2024-07-24 18:15:00.530738] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14acce0 00:11:51.956 [2024-07-24 18:15:00.530744] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x14acce0 00:11:51.956 [2024-07-24 18:15:00.530872] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.956 NewBaseBdev 00:11:51.956 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:51.956 18:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:51.956 18:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:51.956 18:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:51.956 18:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:51.956 18:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:51.956 18:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:52.215 18:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:52.481 [ 00:11:52.481 { 00:11:52.481 "name": "NewBaseBdev", 00:11:52.481 "aliases": [ 00:11:52.481 "b4a13231-7fd0-440a-b6f6-962980252391" 00:11:52.481 ], 00:11:52.481 "product_name": "Malloc disk", 00:11:52.481 "block_size": 512, 00:11:52.481 "num_blocks": 65536, 00:11:52.481 "uuid": "b4a13231-7fd0-440a-b6f6-962980252391", 00:11:52.481 "assigned_rate_limits": { 00:11:52.481 "rw_ios_per_sec": 0, 
00:11:52.481 "rw_mbytes_per_sec": 0, 00:11:52.481 "r_mbytes_per_sec": 0, 00:11:52.481 "w_mbytes_per_sec": 0 00:11:52.481 }, 00:11:52.481 "claimed": true, 00:11:52.481 "claim_type": "exclusive_write", 00:11:52.481 "zoned": false, 00:11:52.481 "supported_io_types": { 00:11:52.481 "read": true, 00:11:52.481 "write": true, 00:11:52.481 "unmap": true, 00:11:52.481 "flush": true, 00:11:52.481 "reset": true, 00:11:52.481 "nvme_admin": false, 00:11:52.481 "nvme_io": false, 00:11:52.481 "nvme_io_md": false, 00:11:52.481 "write_zeroes": true, 00:11:52.481 "zcopy": true, 00:11:52.481 "get_zone_info": false, 00:11:52.481 "zone_management": false, 00:11:52.481 "zone_append": false, 00:11:52.481 "compare": false, 00:11:52.481 "compare_and_write": false, 00:11:52.481 "abort": true, 00:11:52.481 "seek_hole": false, 00:11:52.481 "seek_data": false, 00:11:52.481 "copy": true, 00:11:52.481 "nvme_iov_md": false 00:11:52.481 }, 00:11:52.481 "memory_domains": [ 00:11:52.481 { 00:11:52.481 "dma_device_id": "system", 00:11:52.481 "dma_device_type": 1 00:11:52.481 }, 00:11:52.481 { 00:11:52.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.481 "dma_device_type": 2 00:11:52.481 } 00:11:52.481 ], 00:11:52.481 "driver_specific": {} 00:11:52.481 } 00:11:52.481 ] 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:52.481 18:15:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.481 18:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.481 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.481 "name": "Existed_Raid", 00:11:52.481 "uuid": "94e08fd7-e2bf-4310-ba87-8eb6b25b9cd3", 00:11:52.481 "strip_size_kb": 64, 00:11:52.481 "state": "online", 00:11:52.481 "raid_level": "raid0", 00:11:52.481 "superblock": false, 00:11:52.481 "num_base_bdevs": 3, 00:11:52.481 "num_base_bdevs_discovered": 3, 00:11:52.481 "num_base_bdevs_operational": 3, 00:11:52.481 "base_bdevs_list": [ 00:11:52.481 { 00:11:52.481 "name": "NewBaseBdev", 00:11:52.481 "uuid": "b4a13231-7fd0-440a-b6f6-962980252391", 00:11:52.481 "is_configured": true, 00:11:52.481 "data_offset": 0, 00:11:52.481 "data_size": 65536 00:11:52.481 }, 00:11:52.481 { 00:11:52.481 "name": "BaseBdev2", 00:11:52.481 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:52.481 "is_configured": true, 00:11:52.481 "data_offset": 0, 00:11:52.481 "data_size": 65536 00:11:52.481 }, 00:11:52.481 { 00:11:52.481 "name": "BaseBdev3", 00:11:52.481 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:52.481 "is_configured": true, 00:11:52.481 "data_offset": 0, 
00:11:52.481 "data_size": 65536 00:11:52.481 } 00:11:52.481 ] 00:11:52.481 }' 00:11:52.481 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.481 18:15:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.048 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.048 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:53.048 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:53.048 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:53.048 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:53.048 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:53.048 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:53.048 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:53.308 [2024-07-24 18:15:01.673688] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.308 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:53.308 "name": "Existed_Raid", 00:11:53.308 "aliases": [ 00:11:53.308 "94e08fd7-e2bf-4310-ba87-8eb6b25b9cd3" 00:11:53.308 ], 00:11:53.308 "product_name": "Raid Volume", 00:11:53.308 "block_size": 512, 00:11:53.308 "num_blocks": 196608, 00:11:53.308 "uuid": "94e08fd7-e2bf-4310-ba87-8eb6b25b9cd3", 00:11:53.308 "assigned_rate_limits": { 00:11:53.308 "rw_ios_per_sec": 0, 00:11:53.308 "rw_mbytes_per_sec": 0, 00:11:53.308 "r_mbytes_per_sec": 0, 00:11:53.308 "w_mbytes_per_sec": 0 00:11:53.308 }, 00:11:53.308 
"claimed": false, 00:11:53.308 "zoned": false, 00:11:53.308 "supported_io_types": { 00:11:53.308 "read": true, 00:11:53.308 "write": true, 00:11:53.308 "unmap": true, 00:11:53.308 "flush": true, 00:11:53.308 "reset": true, 00:11:53.308 "nvme_admin": false, 00:11:53.308 "nvme_io": false, 00:11:53.308 "nvme_io_md": false, 00:11:53.308 "write_zeroes": true, 00:11:53.308 "zcopy": false, 00:11:53.308 "get_zone_info": false, 00:11:53.308 "zone_management": false, 00:11:53.308 "zone_append": false, 00:11:53.308 "compare": false, 00:11:53.308 "compare_and_write": false, 00:11:53.308 "abort": false, 00:11:53.308 "seek_hole": false, 00:11:53.308 "seek_data": false, 00:11:53.308 "copy": false, 00:11:53.308 "nvme_iov_md": false 00:11:53.308 }, 00:11:53.308 "memory_domains": [ 00:11:53.308 { 00:11:53.308 "dma_device_id": "system", 00:11:53.308 "dma_device_type": 1 00:11:53.308 }, 00:11:53.308 { 00:11:53.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.308 "dma_device_type": 2 00:11:53.308 }, 00:11:53.308 { 00:11:53.308 "dma_device_id": "system", 00:11:53.308 "dma_device_type": 1 00:11:53.308 }, 00:11:53.308 { 00:11:53.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.308 "dma_device_type": 2 00:11:53.308 }, 00:11:53.308 { 00:11:53.308 "dma_device_id": "system", 00:11:53.308 "dma_device_type": 1 00:11:53.308 }, 00:11:53.308 { 00:11:53.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.308 "dma_device_type": 2 00:11:53.308 } 00:11:53.308 ], 00:11:53.308 "driver_specific": { 00:11:53.308 "raid": { 00:11:53.308 "uuid": "94e08fd7-e2bf-4310-ba87-8eb6b25b9cd3", 00:11:53.308 "strip_size_kb": 64, 00:11:53.308 "state": "online", 00:11:53.308 "raid_level": "raid0", 00:11:53.308 "superblock": false, 00:11:53.308 "num_base_bdevs": 3, 00:11:53.308 "num_base_bdevs_discovered": 3, 00:11:53.308 "num_base_bdevs_operational": 3, 00:11:53.308 "base_bdevs_list": [ 00:11:53.308 { 00:11:53.308 "name": "NewBaseBdev", 00:11:53.308 "uuid": "b4a13231-7fd0-440a-b6f6-962980252391", 
00:11:53.308 "is_configured": true, 00:11:53.308 "data_offset": 0, 00:11:53.308 "data_size": 65536 00:11:53.308 }, 00:11:53.308 { 00:11:53.308 "name": "BaseBdev2", 00:11:53.308 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:53.308 "is_configured": true, 00:11:53.308 "data_offset": 0, 00:11:53.308 "data_size": 65536 00:11:53.308 }, 00:11:53.308 { 00:11:53.308 "name": "BaseBdev3", 00:11:53.308 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:53.308 "is_configured": true, 00:11:53.308 "data_offset": 0, 00:11:53.308 "data_size": 65536 00:11:53.308 } 00:11:53.308 ] 00:11:53.308 } 00:11:53.308 } 00:11:53.308 }' 00:11:53.308 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.308 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:53.308 BaseBdev2 00:11:53.308 BaseBdev3' 00:11:53.308 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:53.308 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:53.308 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:53.308 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:53.308 "name": "NewBaseBdev", 00:11:53.308 "aliases": [ 00:11:53.308 "b4a13231-7fd0-440a-b6f6-962980252391" 00:11:53.308 ], 00:11:53.308 "product_name": "Malloc disk", 00:11:53.308 "block_size": 512, 00:11:53.308 "num_blocks": 65536, 00:11:53.308 "uuid": "b4a13231-7fd0-440a-b6f6-962980252391", 00:11:53.308 "assigned_rate_limits": { 00:11:53.308 "rw_ios_per_sec": 0, 00:11:53.308 "rw_mbytes_per_sec": 0, 00:11:53.308 "r_mbytes_per_sec": 0, 00:11:53.308 "w_mbytes_per_sec": 0 00:11:53.308 }, 00:11:53.308 
"claimed": true, 00:11:53.308 "claim_type": "exclusive_write", 00:11:53.308 "zoned": false, 00:11:53.308 "supported_io_types": { 00:11:53.308 "read": true, 00:11:53.308 "write": true, 00:11:53.308 "unmap": true, 00:11:53.308 "flush": true, 00:11:53.308 "reset": true, 00:11:53.308 "nvme_admin": false, 00:11:53.308 "nvme_io": false, 00:11:53.308 "nvme_io_md": false, 00:11:53.308 "write_zeroes": true, 00:11:53.308 "zcopy": true, 00:11:53.308 "get_zone_info": false, 00:11:53.308 "zone_management": false, 00:11:53.308 "zone_append": false, 00:11:53.308 "compare": false, 00:11:53.308 "compare_and_write": false, 00:11:53.308 "abort": true, 00:11:53.308 "seek_hole": false, 00:11:53.308 "seek_data": false, 00:11:53.308 "copy": true, 00:11:53.308 "nvme_iov_md": false 00:11:53.308 }, 00:11:53.308 "memory_domains": [ 00:11:53.308 { 00:11:53.308 "dma_device_id": "system", 00:11:53.308 "dma_device_type": 1 00:11:53.308 }, 00:11:53.308 { 00:11:53.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.308 "dma_device_type": 2 00:11:53.308 } 00:11:53.308 ], 00:11:53.308 "driver_specific": {} 00:11:53.308 }' 00:11:53.308 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:53.568 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:53.568 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:53.568 18:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:53.568 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:53.568 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:53.568 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:53.568 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:53.568 18:15:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:53.568 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:53.827 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:53.827 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:53.827 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:53.827 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:53.827 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:53.827 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:53.827 "name": "BaseBdev2", 00:11:53.827 "aliases": [ 00:11:53.827 "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6" 00:11:53.827 ], 00:11:53.827 "product_name": "Malloc disk", 00:11:53.827 "block_size": 512, 00:11:53.827 "num_blocks": 65536, 00:11:53.827 "uuid": "b93e2a87-6e5c-41dd-9eae-bf29a170e0c6", 00:11:53.827 "assigned_rate_limits": { 00:11:53.827 "rw_ios_per_sec": 0, 00:11:53.827 "rw_mbytes_per_sec": 0, 00:11:53.827 "r_mbytes_per_sec": 0, 00:11:53.827 "w_mbytes_per_sec": 0 00:11:53.827 }, 00:11:53.827 "claimed": true, 00:11:53.827 "claim_type": "exclusive_write", 00:11:53.827 "zoned": false, 00:11:53.827 "supported_io_types": { 00:11:53.827 "read": true, 00:11:53.827 "write": true, 00:11:53.827 "unmap": true, 00:11:53.827 "flush": true, 00:11:53.827 "reset": true, 00:11:53.827 "nvme_admin": false, 00:11:53.827 "nvme_io": false, 00:11:53.827 "nvme_io_md": false, 00:11:53.827 "write_zeroes": true, 00:11:53.827 "zcopy": true, 00:11:53.827 "get_zone_info": false, 00:11:53.827 "zone_management": false, 00:11:53.827 "zone_append": false, 00:11:53.827 "compare": false, 00:11:53.827 "compare_and_write": false, 
00:11:53.827 "abort": true, 00:11:53.827 "seek_hole": false, 00:11:53.827 "seek_data": false, 00:11:53.827 "copy": true, 00:11:53.827 "nvme_iov_md": false 00:11:53.827 }, 00:11:53.827 "memory_domains": [ 00:11:53.827 { 00:11:53.827 "dma_device_id": "system", 00:11:53.827 "dma_device_type": 1 00:11:53.827 }, 00:11:53.827 { 00:11:53.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.827 "dma_device_type": 2 00:11:53.827 } 00:11:53.827 ], 00:11:53.827 "driver_specific": {} 00:11:53.827 }' 00:11:53.827 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.087 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.087 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:54.087 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.087 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.087 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:54.087 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.087 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.087 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.087 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.087 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.361 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:54.361 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:54.361 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:54.361 18:15:02 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:54.361 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:54.361 "name": "BaseBdev3", 00:11:54.361 "aliases": [ 00:11:54.361 "2537cbb0-f47d-45f0-9905-b5bf8418bffa" 00:11:54.361 ], 00:11:54.361 "product_name": "Malloc disk", 00:11:54.361 "block_size": 512, 00:11:54.361 "num_blocks": 65536, 00:11:54.361 "uuid": "2537cbb0-f47d-45f0-9905-b5bf8418bffa", 00:11:54.361 "assigned_rate_limits": { 00:11:54.361 "rw_ios_per_sec": 0, 00:11:54.361 "rw_mbytes_per_sec": 0, 00:11:54.361 "r_mbytes_per_sec": 0, 00:11:54.361 "w_mbytes_per_sec": 0 00:11:54.361 }, 00:11:54.361 "claimed": true, 00:11:54.361 "claim_type": "exclusive_write", 00:11:54.361 "zoned": false, 00:11:54.361 "supported_io_types": { 00:11:54.361 "read": true, 00:11:54.361 "write": true, 00:11:54.361 "unmap": true, 00:11:54.361 "flush": true, 00:11:54.361 "reset": true, 00:11:54.361 "nvme_admin": false, 00:11:54.361 "nvme_io": false, 00:11:54.361 "nvme_io_md": false, 00:11:54.361 "write_zeroes": true, 00:11:54.361 "zcopy": true, 00:11:54.361 "get_zone_info": false, 00:11:54.361 "zone_management": false, 00:11:54.361 "zone_append": false, 00:11:54.361 "compare": false, 00:11:54.361 "compare_and_write": false, 00:11:54.361 "abort": true, 00:11:54.361 "seek_hole": false, 00:11:54.361 "seek_data": false, 00:11:54.361 "copy": true, 00:11:54.361 "nvme_iov_md": false 00:11:54.361 }, 00:11:54.361 "memory_domains": [ 00:11:54.361 { 00:11:54.361 "dma_device_id": "system", 00:11:54.361 "dma_device_type": 1 00:11:54.361 }, 00:11:54.361 { 00:11:54.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.361 "dma_device_type": 2 00:11:54.361 } 00:11:54.361 ], 00:11:54.361 "driver_specific": {} 00:11:54.361 }' 00:11:54.361 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.361 18:15:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.361 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:54.361 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.644 18:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.644 18:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:54.644 18:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.644 18:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.644 18:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.644 18:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.644 18:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.644 18:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:54.644 18:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:54.912 [2024-07-24 18:15:03.333808] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.912 [2024-07-24 18:15:03.333826] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.912 [2024-07-24 18:15:03.333863] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.912 [2024-07-24 18:15:03.333897] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.912 [2024-07-24 18:15:03.333904] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14acce0 name Existed_Raid, state offline 
00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2171292 00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2171292 ']' 00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2171292 00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2171292 00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2171292' 00:11:54.912 killing process with pid 2171292 00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2171292 00:11:54.912 [2024-07-24 18:15:03.393914] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.912 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2171292 00:11:54.912 [2024-07-24 18:15:03.415894] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:55.171 00:11:55.171 real 0m21.299s 00:11:55.171 user 0m38.946s 00:11:55.171 sys 0m4.085s 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.171 ************************************ 00:11:55.171 END TEST 
raid_state_function_test 00:11:55.171 ************************************ 00:11:55.171 18:15:03 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:55.171 18:15:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:55.171 18:15:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.171 18:15:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.171 ************************************ 00:11:55.171 START TEST raid_state_function_test_sb 00:11:55.171 ************************************ 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2175770 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2175770' 00:11:55.171 Process raid pid: 2175770 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2175770 /var/tmp/spdk-raid.sock 00:11:55.171 18:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2175770 ']' 00:11:55.172 18:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:55.172 18:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.172 18:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:55.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:55.172 18:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.172 18:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.172 [2024-07-24 18:15:03.734157] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:11:55.172 [2024-07-24 18:15:03.734201] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:01.0 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:01.1 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:01.2 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:01.3 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:01.4 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:01.5 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:01.6 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:01.7 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:02.0 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:02.1 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:02.2 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:02.3 cannot be used 00:11:55.431 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:02.4 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:02.5 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:02.6 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b3:02.7 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:01.0 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:01.1 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:01.2 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:01.3 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:01.4 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:01.5 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:01.6 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:01.7 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:02.0 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:02.1 cannot be used 00:11:55.431 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:02.2 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:02.3 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:02.4 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:02.5 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:02.6 cannot be used 00:11:55.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:55.431 EAL: Requested device 0000:b5:02.7 cannot be used 00:11:55.431 [2024-07-24 18:15:03.828510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.431 [2024-07-24 18:15:03.902137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.431 [2024-07-24 18:15:03.954862] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.431 [2024-07-24 18:15:03.954888] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.000 18:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:56.000 18:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:56.000 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:56.259 [2024-07-24 18:15:04.682021] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.259 [2024-07-24 18:15:04.682052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:11:56.259 [2024-07-24 18:15:04.682059] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.259 [2024-07-24 18:15:04.682066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.259 [2024-07-24 18:15:04.682072] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.259 [2024-07-24 18:15:04.682079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.259 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.259 18:15:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.518 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:56.518 "name": "Existed_Raid", 00:11:56.518 "uuid": "ccdaf8c9-1997-475e-8030-6b0fffda499c", 00:11:56.518 "strip_size_kb": 64, 00:11:56.518 "state": "configuring", 00:11:56.518 "raid_level": "raid0", 00:11:56.518 "superblock": true, 00:11:56.518 "num_base_bdevs": 3, 00:11:56.518 "num_base_bdevs_discovered": 0, 00:11:56.518 "num_base_bdevs_operational": 3, 00:11:56.518 "base_bdevs_list": [ 00:11:56.518 { 00:11:56.518 "name": "BaseBdev1", 00:11:56.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.518 "is_configured": false, 00:11:56.518 "data_offset": 0, 00:11:56.518 "data_size": 0 00:11:56.518 }, 00:11:56.518 { 00:11:56.518 "name": "BaseBdev2", 00:11:56.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.518 "is_configured": false, 00:11:56.518 "data_offset": 0, 00:11:56.518 "data_size": 0 00:11:56.518 }, 00:11:56.518 { 00:11:56.518 "name": "BaseBdev3", 00:11:56.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.518 "is_configured": false, 00:11:56.518 "data_offset": 0, 00:11:56.518 "data_size": 0 00:11:56.518 } 00:11:56.518 ] 00:11:56.518 }' 00:11:56.518 18:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:56.518 18:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.777 18:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:57.036 [2024-07-24 18:15:05.520097] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:57.036 [2024-07-24 18:15:05.520119] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10141c0 name Existed_Raid, state 
configuring 00:11:57.036 18:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:57.295 [2024-07-24 18:15:05.692552] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:57.295 [2024-07-24 18:15:05.692571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.295 [2024-07-24 18:15:05.692577] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.295 [2024-07-24 18:15:05.692584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.295 [2024-07-24 18:15:05.692589] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:57.295 [2024-07-24 18:15:05.692596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:57.295 18:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:57.295 [2024-07-24 18:15:05.877612] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.295 BaseBdev1 00:11:57.554 18:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:57.554 18:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:57.554 18:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:57.554 18:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:57.554 18:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:11:57.554 18:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:57.554 18:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:57.554 18:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:57.812 [ 00:11:57.812 { 00:11:57.812 "name": "BaseBdev1", 00:11:57.812 "aliases": [ 00:11:57.812 "dfc7fb6f-9531-4dc4-b575-d2954d82155d" 00:11:57.812 ], 00:11:57.812 "product_name": "Malloc disk", 00:11:57.812 "block_size": 512, 00:11:57.812 "num_blocks": 65536, 00:11:57.812 "uuid": "dfc7fb6f-9531-4dc4-b575-d2954d82155d", 00:11:57.812 "assigned_rate_limits": { 00:11:57.812 "rw_ios_per_sec": 0, 00:11:57.812 "rw_mbytes_per_sec": 0, 00:11:57.812 "r_mbytes_per_sec": 0, 00:11:57.812 "w_mbytes_per_sec": 0 00:11:57.812 }, 00:11:57.812 "claimed": true, 00:11:57.812 "claim_type": "exclusive_write", 00:11:57.812 "zoned": false, 00:11:57.812 "supported_io_types": { 00:11:57.812 "read": true, 00:11:57.812 "write": true, 00:11:57.812 "unmap": true, 00:11:57.812 "flush": true, 00:11:57.812 "reset": true, 00:11:57.812 "nvme_admin": false, 00:11:57.812 "nvme_io": false, 00:11:57.812 "nvme_io_md": false, 00:11:57.812 "write_zeroes": true, 00:11:57.812 "zcopy": true, 00:11:57.812 "get_zone_info": false, 00:11:57.812 "zone_management": false, 00:11:57.812 "zone_append": false, 00:11:57.812 "compare": false, 00:11:57.812 "compare_and_write": false, 00:11:57.812 "abort": true, 00:11:57.812 "seek_hole": false, 00:11:57.812 "seek_data": false, 00:11:57.812 "copy": true, 00:11:57.812 "nvme_iov_md": false 00:11:57.812 }, 00:11:57.812 "memory_domains": [ 00:11:57.812 { 00:11:57.812 "dma_device_id": "system", 00:11:57.812 "dma_device_type": 1 
00:11:57.812 }, 00:11:57.812 { 00:11:57.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.813 "dma_device_type": 2 00:11:57.813 } 00:11:57.813 ], 00:11:57.813 "driver_specific": {} 00:11:57.813 } 00:11:57.813 ] 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.813 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.071 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.071 "name": "Existed_Raid", 
00:11:58.071 "uuid": "98ed20c2-1b68-4095-99eb-c37b2aa6c3bd", 00:11:58.071 "strip_size_kb": 64, 00:11:58.071 "state": "configuring", 00:11:58.071 "raid_level": "raid0", 00:11:58.071 "superblock": true, 00:11:58.071 "num_base_bdevs": 3, 00:11:58.071 "num_base_bdevs_discovered": 1, 00:11:58.071 "num_base_bdevs_operational": 3, 00:11:58.071 "base_bdevs_list": [ 00:11:58.071 { 00:11:58.071 "name": "BaseBdev1", 00:11:58.071 "uuid": "dfc7fb6f-9531-4dc4-b575-d2954d82155d", 00:11:58.071 "is_configured": true, 00:11:58.071 "data_offset": 2048, 00:11:58.071 "data_size": 63488 00:11:58.071 }, 00:11:58.071 { 00:11:58.071 "name": "BaseBdev2", 00:11:58.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.071 "is_configured": false, 00:11:58.071 "data_offset": 0, 00:11:58.071 "data_size": 0 00:11:58.071 }, 00:11:58.071 { 00:11:58.071 "name": "BaseBdev3", 00:11:58.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.071 "is_configured": false, 00:11:58.071 "data_offset": 0, 00:11:58.071 "data_size": 0 00:11:58.071 } 00:11:58.071 ] 00:11:58.071 }' 00:11:58.071 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.071 18:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.330 18:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:58.589 [2024-07-24 18:15:07.012619] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:58.589 [2024-07-24 18:15:07.012667] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1013a90 name Existed_Raid, state configuring 00:11:58.589 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:11:58.849 [2024-07-24 18:15:07.189106] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.849 [2024-07-24 18:15:07.190125] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:58.849 [2024-07-24 18:15:07.190151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:58.849 [2024-07-24 18:15:07.190157] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:58.849 [2024-07-24 18:15:07.190164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:58.849 
18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.849 "name": "Existed_Raid", 00:11:58.849 "uuid": "3787b9c3-fab3-4861-bcff-d359a3d6b081", 00:11:58.849 "strip_size_kb": 64, 00:11:58.849 "state": "configuring", 00:11:58.849 "raid_level": "raid0", 00:11:58.849 "superblock": true, 00:11:58.849 "num_base_bdevs": 3, 00:11:58.849 "num_base_bdevs_discovered": 1, 00:11:58.849 "num_base_bdevs_operational": 3, 00:11:58.849 "base_bdevs_list": [ 00:11:58.849 { 00:11:58.849 "name": "BaseBdev1", 00:11:58.849 "uuid": "dfc7fb6f-9531-4dc4-b575-d2954d82155d", 00:11:58.849 "is_configured": true, 00:11:58.849 "data_offset": 2048, 00:11:58.849 "data_size": 63488 00:11:58.849 }, 00:11:58.849 { 00:11:58.849 "name": "BaseBdev2", 00:11:58.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.849 "is_configured": false, 00:11:58.849 "data_offset": 0, 00:11:58.849 "data_size": 0 00:11:58.849 }, 00:11:58.849 { 00:11:58.849 "name": "BaseBdev3", 00:11:58.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.849 "is_configured": false, 00:11:58.849 "data_offset": 0, 00:11:58.849 "data_size": 0 00:11:58.849 } 00:11:58.849 ] 00:11:58.849 }' 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.849 18:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.418 18:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:59.677 [2024-07-24 18:15:08.054083] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.677 BaseBdev2 00:11:59.677 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:59.677 18:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:59.677 18:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:59.677 18:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:59.677 18:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:59.677 18:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:59.678 18:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:59.678 18:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:59.937 [ 00:11:59.937 { 00:11:59.937 "name": "BaseBdev2", 00:11:59.937 "aliases": [ 00:11:59.937 "b2216bf6-8b7f-4dde-9fb2-3f8556708e16" 00:11:59.937 ], 00:11:59.937 "product_name": "Malloc disk", 00:11:59.937 "block_size": 512, 00:11:59.937 "num_blocks": 65536, 00:11:59.937 "uuid": "b2216bf6-8b7f-4dde-9fb2-3f8556708e16", 00:11:59.937 "assigned_rate_limits": { 00:11:59.937 "rw_ios_per_sec": 0, 00:11:59.937 "rw_mbytes_per_sec": 0, 00:11:59.937 "r_mbytes_per_sec": 0, 00:11:59.937 "w_mbytes_per_sec": 0 00:11:59.937 }, 00:11:59.937 "claimed": true, 00:11:59.937 "claim_type": "exclusive_write", 00:11:59.937 "zoned": false, 00:11:59.937 "supported_io_types": { 
00:11:59.937 "read": true, 00:11:59.937 "write": true, 00:11:59.937 "unmap": true, 00:11:59.937 "flush": true, 00:11:59.937 "reset": true, 00:11:59.937 "nvme_admin": false, 00:11:59.937 "nvme_io": false, 00:11:59.937 "nvme_io_md": false, 00:11:59.937 "write_zeroes": true, 00:11:59.937 "zcopy": true, 00:11:59.937 "get_zone_info": false, 00:11:59.937 "zone_management": false, 00:11:59.937 "zone_append": false, 00:11:59.937 "compare": false, 00:11:59.937 "compare_and_write": false, 00:11:59.937 "abort": true, 00:11:59.937 "seek_hole": false, 00:11:59.937 "seek_data": false, 00:11:59.937 "copy": true, 00:11:59.937 "nvme_iov_md": false 00:11:59.937 }, 00:11:59.937 "memory_domains": [ 00:11:59.937 { 00:11:59.937 "dma_device_id": "system", 00:11:59.937 "dma_device_type": 1 00:11:59.937 }, 00:11:59.937 { 00:11:59.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.937 "dma_device_type": 2 00:11:59.937 } 00:11:59.937 ], 00:11:59.937 "driver_specific": {} 00:11:59.937 } 00:11:59.937 ] 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.937 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.197 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:00.197 "name": "Existed_Raid", 00:12:00.197 "uuid": "3787b9c3-fab3-4861-bcff-d359a3d6b081", 00:12:00.197 "strip_size_kb": 64, 00:12:00.197 "state": "configuring", 00:12:00.197 "raid_level": "raid0", 00:12:00.197 "superblock": true, 00:12:00.197 "num_base_bdevs": 3, 00:12:00.197 "num_base_bdevs_discovered": 2, 00:12:00.197 "num_base_bdevs_operational": 3, 00:12:00.197 "base_bdevs_list": [ 00:12:00.197 { 00:12:00.197 "name": "BaseBdev1", 00:12:00.197 "uuid": "dfc7fb6f-9531-4dc4-b575-d2954d82155d", 00:12:00.197 "is_configured": true, 00:12:00.197 "data_offset": 2048, 00:12:00.197 "data_size": 63488 00:12:00.197 }, 00:12:00.197 { 00:12:00.197 "name": "BaseBdev2", 00:12:00.197 "uuid": "b2216bf6-8b7f-4dde-9fb2-3f8556708e16", 00:12:00.197 "is_configured": true, 00:12:00.197 "data_offset": 2048, 00:12:00.197 "data_size": 63488 00:12:00.197 }, 00:12:00.197 { 00:12:00.197 "name": "BaseBdev3", 00:12:00.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.197 "is_configured": false, 00:12:00.197 "data_offset": 0, 00:12:00.197 
"data_size": 0 00:12:00.197 } 00:12:00.197 ] 00:12:00.197 }' 00:12:00.197 18:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:00.197 18:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.766 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:00.766 [2024-07-24 18:15:09.251931] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.766 [2024-07-24 18:15:09.252042] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1014980 00:12:00.766 [2024-07-24 18:15:09.252052] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:00.766 [2024-07-24 18:15:09.252174] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1014650 00:12:00.766 [2024-07-24 18:15:09.252258] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1014980 00:12:00.766 [2024-07-24 18:15:09.252264] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1014980 00:12:00.766 [2024-07-24 18:15:09.252331] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.766 BaseBdev3 00:12:00.766 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:00.766 18:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:00.766 18:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:00.766 18:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:00.766 18:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:00.766 18:15:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:00.766 18:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:01.026 [ 00:12:01.026 { 00:12:01.026 "name": "BaseBdev3", 00:12:01.026 "aliases": [ 00:12:01.026 "557cc3fb-a5f8-4569-9d3f-5c818b780b98" 00:12:01.026 ], 00:12:01.026 "product_name": "Malloc disk", 00:12:01.026 "block_size": 512, 00:12:01.026 "num_blocks": 65536, 00:12:01.026 "uuid": "557cc3fb-a5f8-4569-9d3f-5c818b780b98", 00:12:01.026 "assigned_rate_limits": { 00:12:01.026 "rw_ios_per_sec": 0, 00:12:01.026 "rw_mbytes_per_sec": 0, 00:12:01.026 "r_mbytes_per_sec": 0, 00:12:01.026 "w_mbytes_per_sec": 0 00:12:01.026 }, 00:12:01.026 "claimed": true, 00:12:01.026 "claim_type": "exclusive_write", 00:12:01.026 "zoned": false, 00:12:01.026 "supported_io_types": { 00:12:01.026 "read": true, 00:12:01.026 "write": true, 00:12:01.026 "unmap": true, 00:12:01.026 "flush": true, 00:12:01.026 "reset": true, 00:12:01.026 "nvme_admin": false, 00:12:01.026 "nvme_io": false, 00:12:01.026 "nvme_io_md": false, 00:12:01.026 "write_zeroes": true, 00:12:01.026 "zcopy": true, 00:12:01.026 "get_zone_info": false, 00:12:01.026 "zone_management": false, 00:12:01.026 "zone_append": false, 00:12:01.026 "compare": false, 00:12:01.026 "compare_and_write": false, 00:12:01.026 "abort": true, 00:12:01.026 "seek_hole": false, 00:12:01.026 "seek_data": false, 00:12:01.026 "copy": true, 00:12:01.026 "nvme_iov_md": false 00:12:01.026 }, 00:12:01.026 "memory_domains": [ 00:12:01.026 { 00:12:01.026 "dma_device_id": "system", 00:12:01.026 "dma_device_type": 1 00:12:01.026 }, 
00:12:01.026 { 00:12:01.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.026 "dma_device_type": 2 00:12:01.026 } 00:12:01.026 ], 00:12:01.026 "driver_specific": {} 00:12:01.026 } 00:12:01.026 ] 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.026 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:12:01.286 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.286 "name": "Existed_Raid", 00:12:01.286 "uuid": "3787b9c3-fab3-4861-bcff-d359a3d6b081", 00:12:01.286 "strip_size_kb": 64, 00:12:01.286 "state": "online", 00:12:01.286 "raid_level": "raid0", 00:12:01.286 "superblock": true, 00:12:01.286 "num_base_bdevs": 3, 00:12:01.286 "num_base_bdevs_discovered": 3, 00:12:01.286 "num_base_bdevs_operational": 3, 00:12:01.286 "base_bdevs_list": [ 00:12:01.286 { 00:12:01.286 "name": "BaseBdev1", 00:12:01.286 "uuid": "dfc7fb6f-9531-4dc4-b575-d2954d82155d", 00:12:01.286 "is_configured": true, 00:12:01.286 "data_offset": 2048, 00:12:01.286 "data_size": 63488 00:12:01.286 }, 00:12:01.286 { 00:12:01.286 "name": "BaseBdev2", 00:12:01.286 "uuid": "b2216bf6-8b7f-4dde-9fb2-3f8556708e16", 00:12:01.286 "is_configured": true, 00:12:01.286 "data_offset": 2048, 00:12:01.286 "data_size": 63488 00:12:01.286 }, 00:12:01.286 { 00:12:01.286 "name": "BaseBdev3", 00:12:01.286 "uuid": "557cc3fb-a5f8-4569-9d3f-5c818b780b98", 00:12:01.286 "is_configured": true, 00:12:01.286 "data_offset": 2048, 00:12:01.286 "data_size": 63488 00:12:01.286 } 00:12:01.286 ] 00:12:01.286 }' 00:12:01.286 18:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.286 18:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:01.855 [2024-07-24 18:15:10.391064] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:01.855 "name": "Existed_Raid", 00:12:01.855 "aliases": [ 00:12:01.855 "3787b9c3-fab3-4861-bcff-d359a3d6b081" 00:12:01.855 ], 00:12:01.855 "product_name": "Raid Volume", 00:12:01.855 "block_size": 512, 00:12:01.855 "num_blocks": 190464, 00:12:01.855 "uuid": "3787b9c3-fab3-4861-bcff-d359a3d6b081", 00:12:01.855 "assigned_rate_limits": { 00:12:01.855 "rw_ios_per_sec": 0, 00:12:01.855 "rw_mbytes_per_sec": 0, 00:12:01.855 "r_mbytes_per_sec": 0, 00:12:01.855 "w_mbytes_per_sec": 0 00:12:01.855 }, 00:12:01.855 "claimed": false, 00:12:01.855 "zoned": false, 00:12:01.855 "supported_io_types": { 00:12:01.855 "read": true, 00:12:01.855 "write": true, 00:12:01.855 "unmap": true, 00:12:01.855 "flush": true, 00:12:01.855 "reset": true, 00:12:01.855 "nvme_admin": false, 00:12:01.855 "nvme_io": false, 00:12:01.855 "nvme_io_md": false, 00:12:01.855 "write_zeroes": true, 00:12:01.855 "zcopy": false, 00:12:01.855 "get_zone_info": false, 00:12:01.855 "zone_management": false, 00:12:01.855 "zone_append": false, 00:12:01.855 "compare": false, 00:12:01.855 "compare_and_write": false, 00:12:01.855 "abort": false, 00:12:01.855 "seek_hole": false, 00:12:01.855 "seek_data": false, 00:12:01.855 "copy": false, 00:12:01.855 "nvme_iov_md": false 00:12:01.855 }, 00:12:01.855 "memory_domains": [ 00:12:01.855 { 
00:12:01.855 "dma_device_id": "system", 00:12:01.855 "dma_device_type": 1 00:12:01.855 }, 00:12:01.855 { 00:12:01.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.855 "dma_device_type": 2 00:12:01.855 }, 00:12:01.855 { 00:12:01.855 "dma_device_id": "system", 00:12:01.855 "dma_device_type": 1 00:12:01.855 }, 00:12:01.855 { 00:12:01.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.855 "dma_device_type": 2 00:12:01.855 }, 00:12:01.855 { 00:12:01.855 "dma_device_id": "system", 00:12:01.855 "dma_device_type": 1 00:12:01.855 }, 00:12:01.855 { 00:12:01.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.855 "dma_device_type": 2 00:12:01.855 } 00:12:01.855 ], 00:12:01.855 "driver_specific": { 00:12:01.855 "raid": { 00:12:01.855 "uuid": "3787b9c3-fab3-4861-bcff-d359a3d6b081", 00:12:01.855 "strip_size_kb": 64, 00:12:01.855 "state": "online", 00:12:01.855 "raid_level": "raid0", 00:12:01.855 "superblock": true, 00:12:01.855 "num_base_bdevs": 3, 00:12:01.855 "num_base_bdevs_discovered": 3, 00:12:01.855 "num_base_bdevs_operational": 3, 00:12:01.855 "base_bdevs_list": [ 00:12:01.855 { 00:12:01.855 "name": "BaseBdev1", 00:12:01.855 "uuid": "dfc7fb6f-9531-4dc4-b575-d2954d82155d", 00:12:01.855 "is_configured": true, 00:12:01.855 "data_offset": 2048, 00:12:01.855 "data_size": 63488 00:12:01.855 }, 00:12:01.855 { 00:12:01.855 "name": "BaseBdev2", 00:12:01.855 "uuid": "b2216bf6-8b7f-4dde-9fb2-3f8556708e16", 00:12:01.855 "is_configured": true, 00:12:01.855 "data_offset": 2048, 00:12:01.855 "data_size": 63488 00:12:01.855 }, 00:12:01.855 { 00:12:01.855 "name": "BaseBdev3", 00:12:01.855 "uuid": "557cc3fb-a5f8-4569-9d3f-5c818b780b98", 00:12:01.855 "is_configured": true, 00:12:01.855 "data_offset": 2048, 00:12:01.855 "data_size": 63488 00:12:01.855 } 00:12:01.855 ] 00:12:01.855 } 00:12:01.855 } 00:12:01.855 }' 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:01.855 BaseBdev2 00:12:01.855 BaseBdev3' 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:01.855 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:02.115 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:02.115 "name": "BaseBdev1", 00:12:02.115 "aliases": [ 00:12:02.115 "dfc7fb6f-9531-4dc4-b575-d2954d82155d" 00:12:02.115 ], 00:12:02.115 "product_name": "Malloc disk", 00:12:02.115 "block_size": 512, 00:12:02.115 "num_blocks": 65536, 00:12:02.115 "uuid": "dfc7fb6f-9531-4dc4-b575-d2954d82155d", 00:12:02.115 "assigned_rate_limits": { 00:12:02.115 "rw_ios_per_sec": 0, 00:12:02.115 "rw_mbytes_per_sec": 0, 00:12:02.115 "r_mbytes_per_sec": 0, 00:12:02.115 "w_mbytes_per_sec": 0 00:12:02.115 }, 00:12:02.115 "claimed": true, 00:12:02.115 "claim_type": "exclusive_write", 00:12:02.115 "zoned": false, 00:12:02.115 "supported_io_types": { 00:12:02.115 "read": true, 00:12:02.115 "write": true, 00:12:02.115 "unmap": true, 00:12:02.115 "flush": true, 00:12:02.115 "reset": true, 00:12:02.115 "nvme_admin": false, 00:12:02.115 "nvme_io": false, 00:12:02.115 "nvme_io_md": false, 00:12:02.115 "write_zeroes": true, 00:12:02.115 "zcopy": true, 00:12:02.115 "get_zone_info": false, 00:12:02.115 "zone_management": false, 00:12:02.115 "zone_append": false, 00:12:02.115 "compare": false, 00:12:02.115 "compare_and_write": false, 00:12:02.115 "abort": true, 00:12:02.115 "seek_hole": false, 00:12:02.115 "seek_data": false, 00:12:02.115 "copy": true, 00:12:02.115 "nvme_iov_md": false 00:12:02.115 
}, 00:12:02.115 "memory_domains": [ 00:12:02.115 { 00:12:02.115 "dma_device_id": "system", 00:12:02.115 "dma_device_type": 1 00:12:02.115 }, 00:12:02.115 { 00:12:02.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.115 "dma_device_type": 2 00:12:02.115 } 00:12:02.115 ], 00:12:02.115 "driver_specific": {} 00:12:02.115 }' 00:12:02.115 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.115 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.115 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:02.115 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:02.375 18:15:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:02.634 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:02.634 "name": "BaseBdev2", 00:12:02.634 "aliases": [ 00:12:02.634 "b2216bf6-8b7f-4dde-9fb2-3f8556708e16" 00:12:02.634 ], 00:12:02.634 "product_name": "Malloc disk", 00:12:02.634 "block_size": 512, 00:12:02.634 "num_blocks": 65536, 00:12:02.634 "uuid": "b2216bf6-8b7f-4dde-9fb2-3f8556708e16", 00:12:02.634 "assigned_rate_limits": { 00:12:02.634 "rw_ios_per_sec": 0, 00:12:02.634 "rw_mbytes_per_sec": 0, 00:12:02.634 "r_mbytes_per_sec": 0, 00:12:02.634 "w_mbytes_per_sec": 0 00:12:02.634 }, 00:12:02.634 "claimed": true, 00:12:02.634 "claim_type": "exclusive_write", 00:12:02.634 "zoned": false, 00:12:02.634 "supported_io_types": { 00:12:02.634 "read": true, 00:12:02.634 "write": true, 00:12:02.634 "unmap": true, 00:12:02.634 "flush": true, 00:12:02.634 "reset": true, 00:12:02.634 "nvme_admin": false, 00:12:02.634 "nvme_io": false, 00:12:02.634 "nvme_io_md": false, 00:12:02.634 "write_zeroes": true, 00:12:02.634 "zcopy": true, 00:12:02.634 "get_zone_info": false, 00:12:02.634 "zone_management": false, 00:12:02.634 "zone_append": false, 00:12:02.634 "compare": false, 00:12:02.634 "compare_and_write": false, 00:12:02.634 "abort": true, 00:12:02.634 "seek_hole": false, 00:12:02.634 "seek_data": false, 00:12:02.634 "copy": true, 00:12:02.634 "nvme_iov_md": false 00:12:02.634 }, 00:12:02.634 "memory_domains": [ 00:12:02.634 { 00:12:02.634 "dma_device_id": "system", 00:12:02.634 "dma_device_type": 1 00:12:02.634 }, 00:12:02.634 { 00:12:02.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.634 "dma_device_type": 2 00:12:02.634 } 00:12:02.634 ], 00:12:02.634 "driver_specific": {} 00:12:02.634 }' 00:12:02.634 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.634 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.634 18:15:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:02.634 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.634 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.892 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:02.892 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.892 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.892 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:02.892 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.892 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.892 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:02.892 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:02.892 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:02.892 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:03.152 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:03.152 "name": "BaseBdev3", 00:12:03.152 "aliases": [ 00:12:03.152 "557cc3fb-a5f8-4569-9d3f-5c818b780b98" 00:12:03.152 ], 00:12:03.152 "product_name": "Malloc disk", 00:12:03.152 "block_size": 512, 00:12:03.152 "num_blocks": 65536, 00:12:03.152 "uuid": "557cc3fb-a5f8-4569-9d3f-5c818b780b98", 00:12:03.152 "assigned_rate_limits": { 00:12:03.152 "rw_ios_per_sec": 0, 00:12:03.152 "rw_mbytes_per_sec": 0, 00:12:03.152 
"r_mbytes_per_sec": 0, 00:12:03.152 "w_mbytes_per_sec": 0 00:12:03.152 }, 00:12:03.152 "claimed": true, 00:12:03.152 "claim_type": "exclusive_write", 00:12:03.152 "zoned": false, 00:12:03.152 "supported_io_types": { 00:12:03.152 "read": true, 00:12:03.152 "write": true, 00:12:03.152 "unmap": true, 00:12:03.152 "flush": true, 00:12:03.152 "reset": true, 00:12:03.152 "nvme_admin": false, 00:12:03.152 "nvme_io": false, 00:12:03.152 "nvme_io_md": false, 00:12:03.152 "write_zeroes": true, 00:12:03.152 "zcopy": true, 00:12:03.152 "get_zone_info": false, 00:12:03.152 "zone_management": false, 00:12:03.152 "zone_append": false, 00:12:03.152 "compare": false, 00:12:03.152 "compare_and_write": false, 00:12:03.152 "abort": true, 00:12:03.152 "seek_hole": false, 00:12:03.152 "seek_data": false, 00:12:03.152 "copy": true, 00:12:03.152 "nvme_iov_md": false 00:12:03.152 }, 00:12:03.152 "memory_domains": [ 00:12:03.152 { 00:12:03.152 "dma_device_id": "system", 00:12:03.152 "dma_device_type": 1 00:12:03.152 }, 00:12:03.152 { 00:12:03.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.152 "dma_device_type": 2 00:12:03.152 } 00:12:03.152 ], 00:12:03.152 "driver_specific": {} 00:12:03.152 }' 00:12:03.152 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:03.152 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:03.152 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:03.152 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:03.152 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:03.152 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:03.152 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:03.411 18:15:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:03.411 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:03.411 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:03.411 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:03.411 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:03.412 18:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:03.412 [2024-07-24 18:15:11.999038] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.412 [2024-07-24 18:15:11.999056] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.412 [2024-07-24 18:15:11.999083] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:03.671 18:15:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:03.671 "name": "Existed_Raid", 00:12:03.671 "uuid": "3787b9c3-fab3-4861-bcff-d359a3d6b081", 00:12:03.671 "strip_size_kb": 64, 00:12:03.671 "state": "offline", 00:12:03.671 "raid_level": "raid0", 00:12:03.671 "superblock": true, 00:12:03.671 "num_base_bdevs": 3, 00:12:03.671 "num_base_bdevs_discovered": 2, 00:12:03.671 "num_base_bdevs_operational": 2, 00:12:03.671 "base_bdevs_list": [ 00:12:03.671 { 00:12:03.671 "name": null, 00:12:03.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.671 "is_configured": false, 00:12:03.671 "data_offset": 2048, 00:12:03.671 "data_size": 63488 00:12:03.671 }, 00:12:03.671 { 00:12:03.671 "name": "BaseBdev2", 00:12:03.671 "uuid": "b2216bf6-8b7f-4dde-9fb2-3f8556708e16", 00:12:03.671 "is_configured": true, 00:12:03.671 
"data_offset": 2048, 00:12:03.671 "data_size": 63488 00:12:03.671 }, 00:12:03.671 { 00:12:03.671 "name": "BaseBdev3", 00:12:03.671 "uuid": "557cc3fb-a5f8-4569-9d3f-5c818b780b98", 00:12:03.671 "is_configured": true, 00:12:03.671 "data_offset": 2048, 00:12:03.671 "data_size": 63488 00:12:03.671 } 00:12:03.671 ] 00:12:03.671 }' 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:03.671 18:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.239 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:04.239 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:04.239 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.239 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:04.498 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:04.498 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.498 18:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:04.498 [2024-07-24 18:15:13.022578] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.498 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:04.498 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:04.498 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.498 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:04.757 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:04.757 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.757 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:05.015 [2024-07-24 18:15:13.364991] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:05.015 [2024-07-24 18:15:13.365021] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1014980 name Existed_Raid, state offline 00:12:05.015 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:05.015 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:05.015 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:05.015 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.015 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:05.015 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:05.015 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:05.015 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:05.015 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:05.016 18:15:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.275 BaseBdev2 00:12:05.275 18:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:05.275 18:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:05.275 18:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.275 18:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:05.275 18:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.275 18:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.275 18:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:05.534 18:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.534 [ 00:12:05.534 { 00:12:05.534 "name": "BaseBdev2", 00:12:05.534 "aliases": [ 00:12:05.534 "d267548d-bc2b-4fa0-860d-45559abcab05" 00:12:05.534 ], 00:12:05.534 "product_name": "Malloc disk", 00:12:05.534 "block_size": 512, 00:12:05.534 "num_blocks": 65536, 00:12:05.534 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:05.534 "assigned_rate_limits": { 00:12:05.534 "rw_ios_per_sec": 0, 00:12:05.534 "rw_mbytes_per_sec": 0, 00:12:05.534 "r_mbytes_per_sec": 0, 00:12:05.534 "w_mbytes_per_sec": 0 00:12:05.535 }, 00:12:05.535 "claimed": false, 00:12:05.535 "zoned": false, 00:12:05.535 "supported_io_types": { 00:12:05.535 "read": true, 00:12:05.535 "write": true, 00:12:05.535 "unmap": 
true, 00:12:05.535 "flush": true, 00:12:05.535 "reset": true, 00:12:05.535 "nvme_admin": false, 00:12:05.535 "nvme_io": false, 00:12:05.535 "nvme_io_md": false, 00:12:05.535 "write_zeroes": true, 00:12:05.535 "zcopy": true, 00:12:05.535 "get_zone_info": false, 00:12:05.535 "zone_management": false, 00:12:05.535 "zone_append": false, 00:12:05.535 "compare": false, 00:12:05.535 "compare_and_write": false, 00:12:05.535 "abort": true, 00:12:05.535 "seek_hole": false, 00:12:05.535 "seek_data": false, 00:12:05.535 "copy": true, 00:12:05.535 "nvme_iov_md": false 00:12:05.535 }, 00:12:05.535 "memory_domains": [ 00:12:05.535 { 00:12:05.535 "dma_device_id": "system", 00:12:05.535 "dma_device_type": 1 00:12:05.535 }, 00:12:05.535 { 00:12:05.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.535 "dma_device_type": 2 00:12:05.535 } 00:12:05.535 ], 00:12:05.535 "driver_specific": {} 00:12:05.535 } 00:12:05.535 ] 00:12:05.535 18:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:05.535 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:05.535 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:05.535 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.794 BaseBdev3 00:12:05.794 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:05.794 18:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:05.794 18:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:05.794 18:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:05.794 18:15:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:05.794 18:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:05.794 18:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:06.054 18:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:06.054 [ 00:12:06.054 { 00:12:06.054 "name": "BaseBdev3", 00:12:06.054 "aliases": [ 00:12:06.054 "156fb68b-342f-43a0-ae31-429ace611ad6" 00:12:06.054 ], 00:12:06.054 "product_name": "Malloc disk", 00:12:06.054 "block_size": 512, 00:12:06.054 "num_blocks": 65536, 00:12:06.054 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:06.054 "assigned_rate_limits": { 00:12:06.054 "rw_ios_per_sec": 0, 00:12:06.054 "rw_mbytes_per_sec": 0, 00:12:06.054 "r_mbytes_per_sec": 0, 00:12:06.054 "w_mbytes_per_sec": 0 00:12:06.054 }, 00:12:06.054 "claimed": false, 00:12:06.054 "zoned": false, 00:12:06.054 "supported_io_types": { 00:12:06.054 "read": true, 00:12:06.054 "write": true, 00:12:06.054 "unmap": true, 00:12:06.054 "flush": true, 00:12:06.054 "reset": true, 00:12:06.054 "nvme_admin": false, 00:12:06.054 "nvme_io": false, 00:12:06.054 "nvme_io_md": false, 00:12:06.054 "write_zeroes": true, 00:12:06.054 "zcopy": true, 00:12:06.054 "get_zone_info": false, 00:12:06.054 "zone_management": false, 00:12:06.054 "zone_append": false, 00:12:06.054 "compare": false, 00:12:06.054 "compare_and_write": false, 00:12:06.054 "abort": true, 00:12:06.054 "seek_hole": false, 00:12:06.054 "seek_data": false, 00:12:06.054 "copy": true, 00:12:06.054 "nvme_iov_md": false 00:12:06.054 }, 00:12:06.054 "memory_domains": [ 00:12:06.054 { 00:12:06.054 "dma_device_id": "system", 00:12:06.054 "dma_device_type": 1 
00:12:06.054 }, 00:12:06.054 { 00:12:06.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.054 "dma_device_type": 2 00:12:06.054 } 00:12:06.054 ], 00:12:06.054 "driver_specific": {} 00:12:06.054 } 00:12:06.054 ] 00:12:06.054 18:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:06.054 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:06.054 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:06.054 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:06.313 [2024-07-24 18:15:14.717606] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:06.313 [2024-07-24 18:15:14.717638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:06.313 [2024-07-24 18:15:14.717650] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.313 [2024-07-24 18:15:14.718524] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.313 "name": "Existed_Raid", 00:12:06.313 "uuid": "cb52c90f-7ddf-4206-bd7e-35fbcc6db544", 00:12:06.313 "strip_size_kb": 64, 00:12:06.313 "state": "configuring", 00:12:06.313 "raid_level": "raid0", 00:12:06.313 "superblock": true, 00:12:06.313 "num_base_bdevs": 3, 00:12:06.313 "num_base_bdevs_discovered": 2, 00:12:06.313 "num_base_bdevs_operational": 3, 00:12:06.313 "base_bdevs_list": [ 00:12:06.313 { 00:12:06.313 "name": "BaseBdev1", 00:12:06.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.313 "is_configured": false, 00:12:06.313 "data_offset": 0, 00:12:06.313 "data_size": 0 00:12:06.313 }, 00:12:06.313 { 00:12:06.313 "name": "BaseBdev2", 00:12:06.313 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:06.313 "is_configured": true, 00:12:06.313 "data_offset": 2048, 00:12:06.313 "data_size": 63488 00:12:06.313 }, 00:12:06.313 { 00:12:06.313 "name": "BaseBdev3", 00:12:06.313 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:06.313 "is_configured": true, 00:12:06.313 "data_offset": 2048, 00:12:06.313 
"data_size": 63488 00:12:06.313 } 00:12:06.313 ] 00:12:06.313 }' 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.313 18:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.879 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:07.138 [2024-07-24 18:15:15.527673] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:07.138 "name": "Existed_Raid", 00:12:07.138 "uuid": "cb52c90f-7ddf-4206-bd7e-35fbcc6db544", 00:12:07.138 "strip_size_kb": 64, 00:12:07.138 "state": "configuring", 00:12:07.138 "raid_level": "raid0", 00:12:07.138 "superblock": true, 00:12:07.138 "num_base_bdevs": 3, 00:12:07.138 "num_base_bdevs_discovered": 1, 00:12:07.138 "num_base_bdevs_operational": 3, 00:12:07.138 "base_bdevs_list": [ 00:12:07.138 { 00:12:07.138 "name": "BaseBdev1", 00:12:07.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.138 "is_configured": false, 00:12:07.138 "data_offset": 0, 00:12:07.138 "data_size": 0 00:12:07.138 }, 00:12:07.138 { 00:12:07.138 "name": null, 00:12:07.138 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:07.138 "is_configured": false, 00:12:07.138 "data_offset": 2048, 00:12:07.138 "data_size": 63488 00:12:07.138 }, 00:12:07.138 { 00:12:07.138 "name": "BaseBdev3", 00:12:07.138 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:07.138 "is_configured": true, 00:12:07.138 "data_offset": 2048, 00:12:07.138 "data_size": 63488 00:12:07.138 } 00:12:07.138 ] 00:12:07.138 }' 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:07.138 18:15:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.707 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.707 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.966 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == 
\f\a\l\s\e ]] 00:12:07.966 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.966 [2024-07-24 18:15:16.537118] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.966 BaseBdev1 00:12:07.966 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:07.966 18:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:07.966 18:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:07.966 18:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:07.966 18:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:07.966 18:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:07.966 18:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:08.226 18:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.487 [ 00:12:08.487 { 00:12:08.487 "name": "BaseBdev1", 00:12:08.487 "aliases": [ 00:12:08.487 "43c17c78-ca88-46b3-9403-0c4bb8ca7200" 00:12:08.487 ], 00:12:08.487 "product_name": "Malloc disk", 00:12:08.487 "block_size": 512, 00:12:08.487 "num_blocks": 65536, 00:12:08.487 "uuid": "43c17c78-ca88-46b3-9403-0c4bb8ca7200", 00:12:08.487 "assigned_rate_limits": { 00:12:08.487 "rw_ios_per_sec": 0, 00:12:08.487 "rw_mbytes_per_sec": 0, 00:12:08.487 "r_mbytes_per_sec": 0, 00:12:08.487 
"w_mbytes_per_sec": 0 00:12:08.487 }, 00:12:08.487 "claimed": true, 00:12:08.487 "claim_type": "exclusive_write", 00:12:08.487 "zoned": false, 00:12:08.487 "supported_io_types": { 00:12:08.487 "read": true, 00:12:08.487 "write": true, 00:12:08.487 "unmap": true, 00:12:08.487 "flush": true, 00:12:08.487 "reset": true, 00:12:08.487 "nvme_admin": false, 00:12:08.487 "nvme_io": false, 00:12:08.487 "nvme_io_md": false, 00:12:08.487 "write_zeroes": true, 00:12:08.487 "zcopy": true, 00:12:08.487 "get_zone_info": false, 00:12:08.487 "zone_management": false, 00:12:08.487 "zone_append": false, 00:12:08.487 "compare": false, 00:12:08.487 "compare_and_write": false, 00:12:08.487 "abort": true, 00:12:08.487 "seek_hole": false, 00:12:08.487 "seek_data": false, 00:12:08.487 "copy": true, 00:12:08.487 "nvme_iov_md": false 00:12:08.487 }, 00:12:08.487 "memory_domains": [ 00:12:08.487 { 00:12:08.487 "dma_device_id": "system", 00:12:08.487 "dma_device_type": 1 00:12:08.487 }, 00:12:08.487 { 00:12:08.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.487 "dma_device_type": 2 00:12:08.487 } 00:12:08.487 ], 00:12:08.487 "driver_specific": {} 00:12:08.487 } 00:12:08.487 ] 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.487 18:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.487 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.487 "name": "Existed_Raid", 00:12:08.487 "uuid": "cb52c90f-7ddf-4206-bd7e-35fbcc6db544", 00:12:08.487 "strip_size_kb": 64, 00:12:08.487 "state": "configuring", 00:12:08.487 "raid_level": "raid0", 00:12:08.487 "superblock": true, 00:12:08.487 "num_base_bdevs": 3, 00:12:08.487 "num_base_bdevs_discovered": 2, 00:12:08.487 "num_base_bdevs_operational": 3, 00:12:08.487 "base_bdevs_list": [ 00:12:08.487 { 00:12:08.487 "name": "BaseBdev1", 00:12:08.487 "uuid": "43c17c78-ca88-46b3-9403-0c4bb8ca7200", 00:12:08.487 "is_configured": true, 00:12:08.487 "data_offset": 2048, 00:12:08.487 "data_size": 63488 00:12:08.487 }, 00:12:08.487 { 00:12:08.487 "name": null, 00:12:08.487 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:08.487 "is_configured": false, 00:12:08.487 "data_offset": 2048, 00:12:08.487 "data_size": 63488 00:12:08.487 }, 00:12:08.487 { 00:12:08.487 "name": "BaseBdev3", 00:12:08.487 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:08.487 "is_configured": true, 00:12:08.487 "data_offset": 2048, 00:12:08.487 "data_size": 63488 00:12:08.487 } 
00:12:08.487 ] 00:12:08.487 }' 00:12:08.487 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.487 18:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.054 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.054 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:09.313 [2024-07-24 18:15:17.844516] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:09.313 
18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.313 18:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.572 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:09.572 "name": "Existed_Raid", 00:12:09.572 "uuid": "cb52c90f-7ddf-4206-bd7e-35fbcc6db544", 00:12:09.572 "strip_size_kb": 64, 00:12:09.572 "state": "configuring", 00:12:09.572 "raid_level": "raid0", 00:12:09.572 "superblock": true, 00:12:09.572 "num_base_bdevs": 3, 00:12:09.572 "num_base_bdevs_discovered": 1, 00:12:09.572 "num_base_bdevs_operational": 3, 00:12:09.572 "base_bdevs_list": [ 00:12:09.572 { 00:12:09.572 "name": "BaseBdev1", 00:12:09.572 "uuid": "43c17c78-ca88-46b3-9403-0c4bb8ca7200", 00:12:09.572 "is_configured": true, 00:12:09.572 "data_offset": 2048, 00:12:09.572 "data_size": 63488 00:12:09.572 }, 00:12:09.572 { 00:12:09.572 "name": null, 00:12:09.572 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:09.572 "is_configured": false, 00:12:09.572 "data_offset": 2048, 00:12:09.572 "data_size": 63488 00:12:09.572 }, 00:12:09.572 { 00:12:09.572 "name": null, 00:12:09.572 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:09.572 "is_configured": false, 00:12:09.572 "data_offset": 2048, 00:12:09.572 "data_size": 63488 00:12:09.572 } 00:12:09.572 ] 00:12:09.572 }' 00:12:09.572 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:09.572 18:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.139 18:15:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.139 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:10.139 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:10.139 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:10.398 [2024-07-24 18:15:18.811014] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.398 18:15:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.398 18:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.657 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.657 "name": "Existed_Raid", 00:12:10.657 "uuid": "cb52c90f-7ddf-4206-bd7e-35fbcc6db544", 00:12:10.657 "strip_size_kb": 64, 00:12:10.657 "state": "configuring", 00:12:10.657 "raid_level": "raid0", 00:12:10.657 "superblock": true, 00:12:10.657 "num_base_bdevs": 3, 00:12:10.657 "num_base_bdevs_discovered": 2, 00:12:10.657 "num_base_bdevs_operational": 3, 00:12:10.657 "base_bdevs_list": [ 00:12:10.657 { 00:12:10.657 "name": "BaseBdev1", 00:12:10.657 "uuid": "43c17c78-ca88-46b3-9403-0c4bb8ca7200", 00:12:10.657 "is_configured": true, 00:12:10.657 "data_offset": 2048, 00:12:10.657 "data_size": 63488 00:12:10.657 }, 00:12:10.657 { 00:12:10.657 "name": null, 00:12:10.657 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:10.657 "is_configured": false, 00:12:10.657 "data_offset": 2048, 00:12:10.657 "data_size": 63488 00:12:10.657 }, 00:12:10.657 { 00:12:10.657 "name": "BaseBdev3", 00:12:10.657 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:10.657 "is_configured": true, 00:12:10.657 "data_offset": 2048, 00:12:10.657 "data_size": 63488 00:12:10.657 } 00:12:10.657 ] 00:12:10.657 }' 00:12:10.657 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.657 18:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.915 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.915 18:15:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:11.173 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:11.173 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:11.173 [2024-07-24 18:15:19.745440] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.173 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.431 "name": "Existed_Raid", 00:12:11.431 "uuid": "cb52c90f-7ddf-4206-bd7e-35fbcc6db544", 00:12:11.431 "strip_size_kb": 64, 00:12:11.431 "state": "configuring", 00:12:11.431 "raid_level": "raid0", 00:12:11.431 "superblock": true, 00:12:11.431 "num_base_bdevs": 3, 00:12:11.431 "num_base_bdevs_discovered": 1, 00:12:11.431 "num_base_bdevs_operational": 3, 00:12:11.431 "base_bdevs_list": [ 00:12:11.431 { 00:12:11.431 "name": null, 00:12:11.431 "uuid": "43c17c78-ca88-46b3-9403-0c4bb8ca7200", 00:12:11.431 "is_configured": false, 00:12:11.431 "data_offset": 2048, 00:12:11.431 "data_size": 63488 00:12:11.431 }, 00:12:11.431 { 00:12:11.431 "name": null, 00:12:11.431 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:11.431 "is_configured": false, 00:12:11.431 "data_offset": 2048, 00:12:11.431 "data_size": 63488 00:12:11.431 }, 00:12:11.431 { 00:12:11.431 "name": "BaseBdev3", 00:12:11.431 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:11.431 "is_configured": true, 00:12:11.431 "data_offset": 2048, 00:12:11.431 "data_size": 63488 00:12:11.431 } 00:12:11.431 ] 00:12:11.431 }' 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.431 18:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.997 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.997 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:12.256 
18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:12.256 [2024-07-24 18:15:20.773781] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.256 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.513 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.513 "name": 
"Existed_Raid", 00:12:12.513 "uuid": "cb52c90f-7ddf-4206-bd7e-35fbcc6db544", 00:12:12.513 "strip_size_kb": 64, 00:12:12.513 "state": "configuring", 00:12:12.513 "raid_level": "raid0", 00:12:12.513 "superblock": true, 00:12:12.513 "num_base_bdevs": 3, 00:12:12.513 "num_base_bdevs_discovered": 2, 00:12:12.513 "num_base_bdevs_operational": 3, 00:12:12.513 "base_bdevs_list": [ 00:12:12.513 { 00:12:12.513 "name": null, 00:12:12.513 "uuid": "43c17c78-ca88-46b3-9403-0c4bb8ca7200", 00:12:12.513 "is_configured": false, 00:12:12.513 "data_offset": 2048, 00:12:12.513 "data_size": 63488 00:12:12.513 }, 00:12:12.513 { 00:12:12.513 "name": "BaseBdev2", 00:12:12.513 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:12.513 "is_configured": true, 00:12:12.513 "data_offset": 2048, 00:12:12.513 "data_size": 63488 00:12:12.513 }, 00:12:12.513 { 00:12:12.513 "name": "BaseBdev3", 00:12:12.513 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:12.513 "is_configured": true, 00:12:12.513 "data_offset": 2048, 00:12:12.513 "data_size": 63488 00:12:12.513 } 00:12:12.513 ] 00:12:12.513 }' 00:12:12.513 18:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.513 18:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.080 18:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.080 18:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:13.080 18:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:13.080 18:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.080 18:15:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:13.339 18:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 43c17c78-ca88-46b3-9403-0c4bb8ca7200 00:12:13.339 [2024-07-24 18:15:21.883457] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:13.339 [2024-07-24 18:15:21.883573] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11b8660 00:12:13.339 [2024-07-24 18:15:21.883582] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:13.339 [2024-07-24 18:15:21.883703] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11c2410 00:12:13.339 [2024-07-24 18:15:21.883776] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11b8660 00:12:13.339 [2024-07-24 18:15:21.883783] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x11b8660 00:12:13.339 [2024-07-24 18:15:21.883843] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.339 NewBaseBdev 00:12:13.339 18:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:13.339 18:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:13.339 18:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:13.339 18:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:13.339 18:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:13.339 18:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:13.339 18:15:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:13.597 18:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:13.856 [ 00:12:13.856 { 00:12:13.856 "name": "NewBaseBdev", 00:12:13.856 "aliases": [ 00:12:13.856 "43c17c78-ca88-46b3-9403-0c4bb8ca7200" 00:12:13.856 ], 00:12:13.856 "product_name": "Malloc disk", 00:12:13.856 "block_size": 512, 00:12:13.856 "num_blocks": 65536, 00:12:13.856 "uuid": "43c17c78-ca88-46b3-9403-0c4bb8ca7200", 00:12:13.856 "assigned_rate_limits": { 00:12:13.856 "rw_ios_per_sec": 0, 00:12:13.856 "rw_mbytes_per_sec": 0, 00:12:13.856 "r_mbytes_per_sec": 0, 00:12:13.856 "w_mbytes_per_sec": 0 00:12:13.856 }, 00:12:13.856 "claimed": true, 00:12:13.856 "claim_type": "exclusive_write", 00:12:13.856 "zoned": false, 00:12:13.856 "supported_io_types": { 00:12:13.856 "read": true, 00:12:13.856 "write": true, 00:12:13.856 "unmap": true, 00:12:13.856 "flush": true, 00:12:13.856 "reset": true, 00:12:13.856 "nvme_admin": false, 00:12:13.856 "nvme_io": false, 00:12:13.856 "nvme_io_md": false, 00:12:13.856 "write_zeroes": true, 00:12:13.856 "zcopy": true, 00:12:13.856 "get_zone_info": false, 00:12:13.856 "zone_management": false, 00:12:13.856 "zone_append": false, 00:12:13.856 "compare": false, 00:12:13.856 "compare_and_write": false, 00:12:13.856 "abort": true, 00:12:13.856 "seek_hole": false, 00:12:13.856 "seek_data": false, 00:12:13.856 "copy": true, 00:12:13.856 "nvme_iov_md": false 00:12:13.856 }, 00:12:13.856 "memory_domains": [ 00:12:13.856 { 00:12:13.856 "dma_device_id": "system", 00:12:13.856 "dma_device_type": 1 00:12:13.856 }, 00:12:13.856 { 00:12:13.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.856 "dma_device_type": 2 00:12:13.856 } 
00:12:13.856 ], 00:12:13.856 "driver_specific": {} 00:12:13.856 } 00:12:13.856 ] 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.856 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.856 "name": "Existed_Raid", 00:12:13.856 "uuid": "cb52c90f-7ddf-4206-bd7e-35fbcc6db544", 00:12:13.856 "strip_size_kb": 64, 00:12:13.856 "state": "online", 00:12:13.856 
"raid_level": "raid0", 00:12:13.856 "superblock": true, 00:12:13.856 "num_base_bdevs": 3, 00:12:13.856 "num_base_bdevs_discovered": 3, 00:12:13.856 "num_base_bdevs_operational": 3, 00:12:13.856 "base_bdevs_list": [ 00:12:13.856 { 00:12:13.856 "name": "NewBaseBdev", 00:12:13.856 "uuid": "43c17c78-ca88-46b3-9403-0c4bb8ca7200", 00:12:13.856 "is_configured": true, 00:12:13.856 "data_offset": 2048, 00:12:13.856 "data_size": 63488 00:12:13.856 }, 00:12:13.856 { 00:12:13.857 "name": "BaseBdev2", 00:12:13.857 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:13.857 "is_configured": true, 00:12:13.857 "data_offset": 2048, 00:12:13.857 "data_size": 63488 00:12:13.857 }, 00:12:13.857 { 00:12:13.857 "name": "BaseBdev3", 00:12:13.857 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:13.857 "is_configured": true, 00:12:13.857 "data_offset": 2048, 00:12:13.857 "data_size": 63488 00:12:13.857 } 00:12:13.857 ] 00:12:13.857 }' 00:12:13.857 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.857 18:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.424 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:14.424 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:14.424 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:14.424 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:14.424 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:14.424 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:14.424 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:14.424 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:14.424 [2024-07-24 18:15:22.966440] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.424 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:14.424 "name": "Existed_Raid", 00:12:14.424 "aliases": [ 00:12:14.424 "cb52c90f-7ddf-4206-bd7e-35fbcc6db544" 00:12:14.424 ], 00:12:14.424 "product_name": "Raid Volume", 00:12:14.424 "block_size": 512, 00:12:14.424 "num_blocks": 190464, 00:12:14.424 "uuid": "cb52c90f-7ddf-4206-bd7e-35fbcc6db544", 00:12:14.424 "assigned_rate_limits": { 00:12:14.424 "rw_ios_per_sec": 0, 00:12:14.424 "rw_mbytes_per_sec": 0, 00:12:14.424 "r_mbytes_per_sec": 0, 00:12:14.424 "w_mbytes_per_sec": 0 00:12:14.424 }, 00:12:14.424 "claimed": false, 00:12:14.424 "zoned": false, 00:12:14.424 "supported_io_types": { 00:12:14.424 "read": true, 00:12:14.424 "write": true, 00:12:14.424 "unmap": true, 00:12:14.424 "flush": true, 00:12:14.424 "reset": true, 00:12:14.424 "nvme_admin": false, 00:12:14.424 "nvme_io": false, 00:12:14.424 "nvme_io_md": false, 00:12:14.424 "write_zeroes": true, 00:12:14.424 "zcopy": false, 00:12:14.424 "get_zone_info": false, 00:12:14.424 "zone_management": false, 00:12:14.424 "zone_append": false, 00:12:14.424 "compare": false, 00:12:14.424 "compare_and_write": false, 00:12:14.424 "abort": false, 00:12:14.424 "seek_hole": false, 00:12:14.424 "seek_data": false, 00:12:14.424 "copy": false, 00:12:14.424 "nvme_iov_md": false 00:12:14.424 }, 00:12:14.424 "memory_domains": [ 00:12:14.424 { 00:12:14.424 "dma_device_id": "system", 00:12:14.424 "dma_device_type": 1 00:12:14.424 }, 00:12:14.424 { 00:12:14.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.424 "dma_device_type": 2 00:12:14.424 }, 00:12:14.424 { 00:12:14.424 "dma_device_id": "system", 00:12:14.424 "dma_device_type": 1 00:12:14.424 
}, 00:12:14.424 { 00:12:14.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.424 "dma_device_type": 2 00:12:14.424 }, 00:12:14.424 { 00:12:14.424 "dma_device_id": "system", 00:12:14.424 "dma_device_type": 1 00:12:14.424 }, 00:12:14.424 { 00:12:14.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.424 "dma_device_type": 2 00:12:14.424 } 00:12:14.424 ], 00:12:14.424 "driver_specific": { 00:12:14.424 "raid": { 00:12:14.424 "uuid": "cb52c90f-7ddf-4206-bd7e-35fbcc6db544", 00:12:14.424 "strip_size_kb": 64, 00:12:14.424 "state": "online", 00:12:14.424 "raid_level": "raid0", 00:12:14.424 "superblock": true, 00:12:14.424 "num_base_bdevs": 3, 00:12:14.425 "num_base_bdevs_discovered": 3, 00:12:14.425 "num_base_bdevs_operational": 3, 00:12:14.425 "base_bdevs_list": [ 00:12:14.425 { 00:12:14.425 "name": "NewBaseBdev", 00:12:14.425 "uuid": "43c17c78-ca88-46b3-9403-0c4bb8ca7200", 00:12:14.425 "is_configured": true, 00:12:14.425 "data_offset": 2048, 00:12:14.425 "data_size": 63488 00:12:14.425 }, 00:12:14.425 { 00:12:14.425 "name": "BaseBdev2", 00:12:14.425 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:14.425 "is_configured": true, 00:12:14.425 "data_offset": 2048, 00:12:14.425 "data_size": 63488 00:12:14.425 }, 00:12:14.425 { 00:12:14.425 "name": "BaseBdev3", 00:12:14.425 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:14.425 "is_configured": true, 00:12:14.425 "data_offset": 2048, 00:12:14.425 "data_size": 63488 00:12:14.425 } 00:12:14.425 ] 00:12:14.425 } 00:12:14.425 } 00:12:14.425 }' 00:12:14.425 18:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.425 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:14.425 BaseBdev2 00:12:14.425 BaseBdev3' 00:12:14.425 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.425 
18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:14.425 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.683 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.683 "name": "NewBaseBdev", 00:12:14.683 "aliases": [ 00:12:14.683 "43c17c78-ca88-46b3-9403-0c4bb8ca7200" 00:12:14.683 ], 00:12:14.683 "product_name": "Malloc disk", 00:12:14.683 "block_size": 512, 00:12:14.683 "num_blocks": 65536, 00:12:14.683 "uuid": "43c17c78-ca88-46b3-9403-0c4bb8ca7200", 00:12:14.683 "assigned_rate_limits": { 00:12:14.683 "rw_ios_per_sec": 0, 00:12:14.683 "rw_mbytes_per_sec": 0, 00:12:14.683 "r_mbytes_per_sec": 0, 00:12:14.683 "w_mbytes_per_sec": 0 00:12:14.683 }, 00:12:14.683 "claimed": true, 00:12:14.683 "claim_type": "exclusive_write", 00:12:14.683 "zoned": false, 00:12:14.683 "supported_io_types": { 00:12:14.683 "read": true, 00:12:14.683 "write": true, 00:12:14.683 "unmap": true, 00:12:14.683 "flush": true, 00:12:14.683 "reset": true, 00:12:14.683 "nvme_admin": false, 00:12:14.684 "nvme_io": false, 00:12:14.684 "nvme_io_md": false, 00:12:14.684 "write_zeroes": true, 00:12:14.684 "zcopy": true, 00:12:14.684 "get_zone_info": false, 00:12:14.684 "zone_management": false, 00:12:14.684 "zone_append": false, 00:12:14.684 "compare": false, 00:12:14.684 "compare_and_write": false, 00:12:14.684 "abort": true, 00:12:14.684 "seek_hole": false, 00:12:14.684 "seek_data": false, 00:12:14.684 "copy": true, 00:12:14.684 "nvme_iov_md": false 00:12:14.684 }, 00:12:14.684 "memory_domains": [ 00:12:14.684 { 00:12:14.684 "dma_device_id": "system", 00:12:14.684 "dma_device_type": 1 00:12:14.684 }, 00:12:14.684 { 00:12:14.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.684 "dma_device_type": 2 00:12:14.684 } 00:12:14.684 ], 00:12:14.684 
"driver_specific": {} 00:12:14.684 }' 00:12:14.684 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.684 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.684 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.684 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.684 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.942 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.942 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.942 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.942 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:14.942 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.942 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.942 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:14.942 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.942 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:14.942 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:15.201 "name": "BaseBdev2", 00:12:15.201 "aliases": [ 00:12:15.201 "d267548d-bc2b-4fa0-860d-45559abcab05" 00:12:15.201 ], 00:12:15.201 "product_name": 
"Malloc disk", 00:12:15.201 "block_size": 512, 00:12:15.201 "num_blocks": 65536, 00:12:15.201 "uuid": "d267548d-bc2b-4fa0-860d-45559abcab05", 00:12:15.201 "assigned_rate_limits": { 00:12:15.201 "rw_ios_per_sec": 0, 00:12:15.201 "rw_mbytes_per_sec": 0, 00:12:15.201 "r_mbytes_per_sec": 0, 00:12:15.201 "w_mbytes_per_sec": 0 00:12:15.201 }, 00:12:15.201 "claimed": true, 00:12:15.201 "claim_type": "exclusive_write", 00:12:15.201 "zoned": false, 00:12:15.201 "supported_io_types": { 00:12:15.201 "read": true, 00:12:15.201 "write": true, 00:12:15.201 "unmap": true, 00:12:15.201 "flush": true, 00:12:15.201 "reset": true, 00:12:15.201 "nvme_admin": false, 00:12:15.201 "nvme_io": false, 00:12:15.201 "nvme_io_md": false, 00:12:15.201 "write_zeroes": true, 00:12:15.201 "zcopy": true, 00:12:15.201 "get_zone_info": false, 00:12:15.201 "zone_management": false, 00:12:15.201 "zone_append": false, 00:12:15.201 "compare": false, 00:12:15.201 "compare_and_write": false, 00:12:15.201 "abort": true, 00:12:15.201 "seek_hole": false, 00:12:15.201 "seek_data": false, 00:12:15.201 "copy": true, 00:12:15.201 "nvme_iov_md": false 00:12:15.201 }, 00:12:15.201 "memory_domains": [ 00:12:15.201 { 00:12:15.201 "dma_device_id": "system", 00:12:15.201 "dma_device_type": 1 00:12:15.201 }, 00:12:15.201 { 00:12:15.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.201 "dma_device_type": 2 00:12:15.201 } 00:12:15.201 ], 00:12:15.201 "driver_specific": {} 00:12:15.201 }' 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.201 
18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.201 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.460 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:15.460 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:15.460 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:15.460 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:15.460 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:15.460 "name": "BaseBdev3", 00:12:15.460 "aliases": [ 00:12:15.460 "156fb68b-342f-43a0-ae31-429ace611ad6" 00:12:15.460 ], 00:12:15.460 "product_name": "Malloc disk", 00:12:15.460 "block_size": 512, 00:12:15.460 "num_blocks": 65536, 00:12:15.460 "uuid": "156fb68b-342f-43a0-ae31-429ace611ad6", 00:12:15.460 "assigned_rate_limits": { 00:12:15.460 "rw_ios_per_sec": 0, 00:12:15.460 "rw_mbytes_per_sec": 0, 00:12:15.460 "r_mbytes_per_sec": 0, 00:12:15.460 "w_mbytes_per_sec": 0 00:12:15.460 }, 00:12:15.460 "claimed": true, 00:12:15.460 "claim_type": "exclusive_write", 00:12:15.460 "zoned": false, 00:12:15.460 "supported_io_types": { 00:12:15.460 "read": true, 00:12:15.460 "write": true, 00:12:15.460 "unmap": true, 
00:12:15.460 "flush": true, 00:12:15.460 "reset": true, 00:12:15.460 "nvme_admin": false, 00:12:15.460 "nvme_io": false, 00:12:15.460 "nvme_io_md": false, 00:12:15.460 "write_zeroes": true, 00:12:15.460 "zcopy": true, 00:12:15.460 "get_zone_info": false, 00:12:15.460 "zone_management": false, 00:12:15.460 "zone_append": false, 00:12:15.460 "compare": false, 00:12:15.460 "compare_and_write": false, 00:12:15.460 "abort": true, 00:12:15.460 "seek_hole": false, 00:12:15.460 "seek_data": false, 00:12:15.460 "copy": true, 00:12:15.460 "nvme_iov_md": false 00:12:15.460 }, 00:12:15.460 "memory_domains": [ 00:12:15.460 { 00:12:15.460 "dma_device_id": "system", 00:12:15.460 "dma_device_type": 1 00:12:15.460 }, 00:12:15.460 { 00:12:15.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.460 "dma_device_type": 2 00:12:15.460 } 00:12:15.460 ], 00:12:15.460 "driver_specific": {} 00:12:15.460 }' 00:12:15.461 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.461 18:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.461 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:15.461 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.720 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.720 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:15.720 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.720 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.720 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:15.720 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.720 18:15:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.720 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:15.720 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:15.980 [2024-07-24 18:15:24.430039] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.980 [2024-07-24 18:15:24.430057] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.980 [2024-07-24 18:15:24.430095] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.980 [2024-07-24 18:15:24.430131] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.980 [2024-07-24 18:15:24.430139] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11b8660 name Existed_Raid, state offline 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2175770 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2175770 ']' 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2175770 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2175770 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2175770' 00:12:15.980 killing process with pid 2175770 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2175770 00:12:15.980 [2024-07-24 18:15:24.493933] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.980 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2175770 00:12:15.980 [2024-07-24 18:15:24.516403] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.239 18:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:16.239 00:12:16.239 real 0m21.015s 00:12:16.239 user 0m38.381s 00:12:16.239 sys 0m3.981s 00:12:16.239 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.239 18:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.239 ************************************ 00:12:16.239 END TEST raid_state_function_test_sb 00:12:16.239 ************************************ 00:12:16.239 18:15:24 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:16.240 18:15:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:16.240 18:15:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.240 18:15:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.240 ************************************ 00:12:16.240 START TEST raid_superblock_test 00:12:16.240 ************************************ 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2180529 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2180529 /var/tmp/spdk-raid.sock 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2180529 ']' 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 
00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:16.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.240 18:15:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:16.240 [2024-07-24 18:15:24.813838] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:12:16.240 [2024-07-24 18:15:24.813880] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180529 ] 00:12:16.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:16.500 EAL: Requested device 0000:b3:01.0 cannot be used 00:12:16.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:16.500 EAL: Requested device 0000:b3:01.1 cannot be used 00:12:16.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:16.500 EAL: Requested device 0000:b3:01.2 cannot be used 00:12:16.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:16.500 EAL: Requested device 0000:b3:01.3 cannot be used 00:12:16.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:16.500 EAL: Requested device 0000:b3:01.4 cannot be used 00:12:16.500 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:12:16.500 EAL: Requested device 0000:b3:01.5 cannot be used 00:12:16.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:16.500 [identical "EAL: Requested device ... cannot be used" / "qat_pci_device_allocate(): Reached maximum number of QAT devices" message pairs repeated for devices 0000:b3:01.6 through 0000:b5:02.7] 00:12:16.500 [2024-07-24 18:15:24.906266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.500 [2024-07-24 18:15:24.979822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.500 [2024-07-24 18:15:25.029994]
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.500 [2024-07-24 18:15:25.030019] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.069 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:17.328 malloc1 00:12:17.328 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:17.328 [2024-07-24 18:15:25.901797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:17.328 [2024-07-24 18:15:25.901832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:17.328 [2024-07-24 18:15:25.901846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22a3cb0 00:12:17.328 [2024-07-24 18:15:25.901855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.328 [2024-07-24 18:15:25.902962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.328 [2024-07-24 18:15:25.902986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:17.328 pt1 00:12:17.328 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:17.328 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:17.328 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:17.328 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:12:17.328 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:17.328 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.328 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.328 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.328 18:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:17.587 malloc2 00:12:17.587 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.846 [2024-07-24 18:15:26.222281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:12:17.846 [2024-07-24 18:15:26.222314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.846 [2024-07-24 18:15:26.222325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22a50b0 00:12:17.846 [2024-07-24 18:15:26.222337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.846 [2024-07-24 18:15:26.223344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.846 [2024-07-24 18:15:26.223367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.846 pt2 00:12:17.846 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:17.846 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:17.846 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:17.846 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:17.846 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:17.846 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:17.846 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:17.846 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:17.846 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:17.846 malloc3 00:12:17.846 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:12:18.105 [2024-07-24 18:15:26.542787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:18.105 [2024-07-24 18:15:26.542819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.105 [2024-07-24 18:15:26.542830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x243ba80 00:12:18.105 [2024-07-24 18:15:26.542838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.105 [2024-07-24 18:15:26.543849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.105 [2024-07-24 18:15:26.543871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:18.105 pt3 00:12:18.105 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:18.105 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:18.105 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:18.105 [2024-07-24 18:15:26.699210] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:18.105 [2024-07-24 18:15:26.700064] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.105 [2024-07-24 18:15:26.700103] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:18.105 [2024-07-24 18:15:26.700203] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x229c5e0 00:12:18.105 [2024-07-24 18:15:26.700210] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:18.105 [2024-07-24 18:15:26.700343] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22a3980 00:12:18.105 [2024-07-24 18:15:26.700438] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x229c5e0 00:12:18.105 [2024-07-24 18:15:26.700445] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x229c5e0 00:12:18.105 [2024-07-24 18:15:26.700507] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.364 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:18.364 "name": "raid_bdev1", 00:12:18.364 "uuid": "3f335857-c175-4524-bf87-ae6db32bff4e", 00:12:18.364 
"strip_size_kb": 64, 00:12:18.364 "state": "online", 00:12:18.364 "raid_level": "raid0", 00:12:18.364 "superblock": true, 00:12:18.365 "num_base_bdevs": 3, 00:12:18.365 "num_base_bdevs_discovered": 3, 00:12:18.365 "num_base_bdevs_operational": 3, 00:12:18.365 "base_bdevs_list": [ 00:12:18.365 { 00:12:18.365 "name": "pt1", 00:12:18.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:18.365 "is_configured": true, 00:12:18.365 "data_offset": 2048, 00:12:18.365 "data_size": 63488 00:12:18.365 }, 00:12:18.365 { 00:12:18.365 "name": "pt2", 00:12:18.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.365 "is_configured": true, 00:12:18.365 "data_offset": 2048, 00:12:18.365 "data_size": 63488 00:12:18.365 }, 00:12:18.365 { 00:12:18.365 "name": "pt3", 00:12:18.365 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.365 "is_configured": true, 00:12:18.365 "data_offset": 2048, 00:12:18.365 "data_size": 63488 00:12:18.365 } 00:12:18.365 ] 00:12:18.365 }' 00:12:18.365 18:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:18.365 18:15:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.933 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:18.933 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:18.933 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:18.933 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:18.933 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:18.933 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:18.933 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:12:18.933 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:18.933 [2024-07-24 18:15:27.517489] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.192 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:19.192 "name": "raid_bdev1", 00:12:19.192 "aliases": [ 00:12:19.192 "3f335857-c175-4524-bf87-ae6db32bff4e" 00:12:19.192 ], 00:12:19.192 "product_name": "Raid Volume", 00:12:19.192 "block_size": 512, 00:12:19.192 "num_blocks": 190464, 00:12:19.192 "uuid": "3f335857-c175-4524-bf87-ae6db32bff4e", 00:12:19.192 "assigned_rate_limits": { 00:12:19.192 "rw_ios_per_sec": 0, 00:12:19.192 "rw_mbytes_per_sec": 0, 00:12:19.192 "r_mbytes_per_sec": 0, 00:12:19.192 "w_mbytes_per_sec": 0 00:12:19.192 }, 00:12:19.192 "claimed": false, 00:12:19.192 "zoned": false, 00:12:19.192 "supported_io_types": { 00:12:19.192 "read": true, 00:12:19.192 "write": true, 00:12:19.192 "unmap": true, 00:12:19.192 "flush": true, 00:12:19.192 "reset": true, 00:12:19.192 "nvme_admin": false, 00:12:19.192 "nvme_io": false, 00:12:19.192 "nvme_io_md": false, 00:12:19.192 "write_zeroes": true, 00:12:19.192 "zcopy": false, 00:12:19.192 "get_zone_info": false, 00:12:19.192 "zone_management": false, 00:12:19.192 "zone_append": false, 00:12:19.192 "compare": false, 00:12:19.192 "compare_and_write": false, 00:12:19.192 "abort": false, 00:12:19.192 "seek_hole": false, 00:12:19.192 "seek_data": false, 00:12:19.192 "copy": false, 00:12:19.192 "nvme_iov_md": false 00:12:19.192 }, 00:12:19.192 "memory_domains": [ 00:12:19.192 { 00:12:19.192 "dma_device_id": "system", 00:12:19.192 "dma_device_type": 1 00:12:19.192 }, 00:12:19.192 { 00:12:19.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.192 "dma_device_type": 2 00:12:19.192 }, 00:12:19.192 { 00:12:19.192 "dma_device_id": "system", 00:12:19.192 "dma_device_type": 1 00:12:19.192 }, 00:12:19.192 { 00:12:19.192 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.192 "dma_device_type": 2 00:12:19.192 }, 00:12:19.192 { 00:12:19.192 "dma_device_id": "system", 00:12:19.192 "dma_device_type": 1 00:12:19.192 }, 00:12:19.192 { 00:12:19.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.192 "dma_device_type": 2 00:12:19.192 } 00:12:19.192 ], 00:12:19.192 "driver_specific": { 00:12:19.192 "raid": { 00:12:19.192 "uuid": "3f335857-c175-4524-bf87-ae6db32bff4e", 00:12:19.192 "strip_size_kb": 64, 00:12:19.192 "state": "online", 00:12:19.192 "raid_level": "raid0", 00:12:19.192 "superblock": true, 00:12:19.192 "num_base_bdevs": 3, 00:12:19.192 "num_base_bdevs_discovered": 3, 00:12:19.192 "num_base_bdevs_operational": 3, 00:12:19.192 "base_bdevs_list": [ 00:12:19.192 { 00:12:19.192 "name": "pt1", 00:12:19.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.193 "is_configured": true, 00:12:19.193 "data_offset": 2048, 00:12:19.193 "data_size": 63488 00:12:19.193 }, 00:12:19.193 { 00:12:19.193 "name": "pt2", 00:12:19.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.193 "is_configured": true, 00:12:19.193 "data_offset": 2048, 00:12:19.193 "data_size": 63488 00:12:19.193 }, 00:12:19.193 { 00:12:19.193 "name": "pt3", 00:12:19.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.193 "is_configured": true, 00:12:19.193 "data_offset": 2048, 00:12:19.193 "data_size": 63488 00:12:19.193 } 00:12:19.193 ] 00:12:19.193 } 00:12:19.193 } 00:12:19.193 }' 00:12:19.193 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.193 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:19.193 pt2 00:12:19.193 pt3' 00:12:19.193 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:19.193 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:19.193 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:19.193 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:19.193 "name": "pt1", 00:12:19.193 "aliases": [ 00:12:19.193 "00000000-0000-0000-0000-000000000001" 00:12:19.193 ], 00:12:19.193 "product_name": "passthru", 00:12:19.193 "block_size": 512, 00:12:19.193 "num_blocks": 65536, 00:12:19.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.193 "assigned_rate_limits": { 00:12:19.193 "rw_ios_per_sec": 0, 00:12:19.193 "rw_mbytes_per_sec": 0, 00:12:19.193 "r_mbytes_per_sec": 0, 00:12:19.193 "w_mbytes_per_sec": 0 00:12:19.193 }, 00:12:19.193 "claimed": true, 00:12:19.193 "claim_type": "exclusive_write", 00:12:19.193 "zoned": false, 00:12:19.193 "supported_io_types": { 00:12:19.193 "read": true, 00:12:19.193 "write": true, 00:12:19.193 "unmap": true, 00:12:19.193 "flush": true, 00:12:19.193 "reset": true, 00:12:19.193 "nvme_admin": false, 00:12:19.193 "nvme_io": false, 00:12:19.193 "nvme_io_md": false, 00:12:19.193 "write_zeroes": true, 00:12:19.193 "zcopy": true, 00:12:19.193 "get_zone_info": false, 00:12:19.193 "zone_management": false, 00:12:19.193 "zone_append": false, 00:12:19.193 "compare": false, 00:12:19.193 "compare_and_write": false, 00:12:19.193 "abort": true, 00:12:19.193 "seek_hole": false, 00:12:19.193 "seek_data": false, 00:12:19.193 "copy": true, 00:12:19.193 "nvme_iov_md": false 00:12:19.193 }, 00:12:19.193 "memory_domains": [ 00:12:19.193 { 00:12:19.193 "dma_device_id": "system", 00:12:19.193 "dma_device_type": 1 00:12:19.193 }, 00:12:19.193 { 00:12:19.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.193 "dma_device_type": 2 00:12:19.193 } 00:12:19.193 ], 00:12:19.193 "driver_specific": { 00:12:19.193 "passthru": { 00:12:19.193 "name": "pt1", 00:12:19.193 "base_bdev_name": "malloc1" 
00:12:19.193 } 00:12:19.193 } 00:12:19.193 }' 00:12:19.193 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:19.193 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:19.452 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:19.452 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:19.452 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:19.452 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:19.452 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:19.452 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:19.452 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:19.452 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:19.452 18:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:19.452 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:19.452 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:19.452 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:19.452 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:19.712 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:19.712 "name": "pt2", 00:12:19.712 "aliases": [ 00:12:19.712 "00000000-0000-0000-0000-000000000002" 00:12:19.712 ], 00:12:19.712 "product_name": "passthru", 00:12:19.712 "block_size": 512, 00:12:19.712 "num_blocks": 65536, 00:12:19.712 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:12:19.712 "assigned_rate_limits": { 00:12:19.712 "rw_ios_per_sec": 0, 00:12:19.712 "rw_mbytes_per_sec": 0, 00:12:19.712 "r_mbytes_per_sec": 0, 00:12:19.712 "w_mbytes_per_sec": 0 00:12:19.712 }, 00:12:19.712 "claimed": true, 00:12:19.712 "claim_type": "exclusive_write", 00:12:19.712 "zoned": false, 00:12:19.712 "supported_io_types": { 00:12:19.712 "read": true, 00:12:19.712 "write": true, 00:12:19.712 "unmap": true, 00:12:19.712 "flush": true, 00:12:19.712 "reset": true, 00:12:19.712 "nvme_admin": false, 00:12:19.712 "nvme_io": false, 00:12:19.712 "nvme_io_md": false, 00:12:19.712 "write_zeroes": true, 00:12:19.712 "zcopy": true, 00:12:19.712 "get_zone_info": false, 00:12:19.712 "zone_management": false, 00:12:19.712 "zone_append": false, 00:12:19.712 "compare": false, 00:12:19.712 "compare_and_write": false, 00:12:19.712 "abort": true, 00:12:19.712 "seek_hole": false, 00:12:19.712 "seek_data": false, 00:12:19.712 "copy": true, 00:12:19.712 "nvme_iov_md": false 00:12:19.712 }, 00:12:19.712 "memory_domains": [ 00:12:19.712 { 00:12:19.712 "dma_device_id": "system", 00:12:19.712 "dma_device_type": 1 00:12:19.712 }, 00:12:19.712 { 00:12:19.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.712 "dma_device_type": 2 00:12:19.712 } 00:12:19.712 ], 00:12:19.712 "driver_specific": { 00:12:19.712 "passthru": { 00:12:19.712 "name": "pt2", 00:12:19.712 "base_bdev_name": "malloc2" 00:12:19.712 } 00:12:19.712 } 00:12:19.712 }' 00:12:19.712 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:19.712 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:19.712 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:19.712 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.035 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.035 18:15:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:20.035 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:20.035 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:20.035 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:20.035 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:20.035 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:20.035 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:20.035 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:20.035 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:20.035 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:20.329 "name": "pt3", 00:12:20.329 "aliases": [ 00:12:20.329 "00000000-0000-0000-0000-000000000003" 00:12:20.329 ], 00:12:20.329 "product_name": "passthru", 00:12:20.329 "block_size": 512, 00:12:20.329 "num_blocks": 65536, 00:12:20.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.329 "assigned_rate_limits": { 00:12:20.329 "rw_ios_per_sec": 0, 00:12:20.329 "rw_mbytes_per_sec": 0, 00:12:20.329 "r_mbytes_per_sec": 0, 00:12:20.329 "w_mbytes_per_sec": 0 00:12:20.329 }, 00:12:20.329 "claimed": true, 00:12:20.329 "claim_type": "exclusive_write", 00:12:20.329 "zoned": false, 00:12:20.329 "supported_io_types": { 00:12:20.329 "read": true, 00:12:20.329 "write": true, 00:12:20.329 "unmap": true, 00:12:20.329 "flush": true, 00:12:20.329 "reset": true, 00:12:20.329 "nvme_admin": false, 00:12:20.329 
"nvme_io": false, 00:12:20.329 "nvme_io_md": false, 00:12:20.329 "write_zeroes": true, 00:12:20.329 "zcopy": true, 00:12:20.329 "get_zone_info": false, 00:12:20.329 "zone_management": false, 00:12:20.329 "zone_append": false, 00:12:20.329 "compare": false, 00:12:20.329 "compare_and_write": false, 00:12:20.329 "abort": true, 00:12:20.329 "seek_hole": false, 00:12:20.329 "seek_data": false, 00:12:20.329 "copy": true, 00:12:20.329 "nvme_iov_md": false 00:12:20.329 }, 00:12:20.329 "memory_domains": [ 00:12:20.329 { 00:12:20.329 "dma_device_id": "system", 00:12:20.329 "dma_device_type": 1 00:12:20.329 }, 00:12:20.329 { 00:12:20.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.329 "dma_device_type": 2 00:12:20.329 } 00:12:20.329 ], 00:12:20.329 "driver_specific": { 00:12:20.329 "passthru": { 00:12:20.329 "name": "pt3", 00:12:20.329 "base_bdev_name": "malloc3" 00:12:20.329 } 00:12:20.329 } 00:12:20.329 }' 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:20.329 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:20.588 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:12:20.588 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:20.588 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:20.588 18:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:20.588 [2024-07-24 18:15:29.113595] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.588 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3f335857-c175-4524-bf87-ae6db32bff4e 00:12:20.588 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3f335857-c175-4524-bf87-ae6db32bff4e ']' 00:12:20.588 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:20.848 [2024-07-24 18:15:29.285853] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:20.848 [2024-07-24 18:15:29.285867] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.848 [2024-07-24 18:15:29.285902] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.848 [2024-07-24 18:15:29.285938] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.848 [2024-07-24 18:15:29.285945] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x229c5e0 name raid_bdev1, state offline 00:12:20.848 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.848 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:21.107 18:15:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:21.107 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:21.107 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:21.107 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:21.107 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:21.107 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:21.366 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:21.366 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:21.625 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:21.625 18:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:21.625 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:21.884 [2024-07-24 18:15:30.304456] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:21.884 [2024-07-24 18:15:30.305397] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:21.884 [2024-07-24 18:15:30.305428] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:21.884 [2024-07-24 18:15:30.305460] 
bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:21.884 [2024-07-24 18:15:30.305486] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:21.884 [2024-07-24 18:15:30.305501] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:21.884 [2024-07-24 18:15:30.305513] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:21.884 [2024-07-24 18:15:30.305519] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2447730 name raid_bdev1, state configuring 00:12:21.884 request: 00:12:21.884 { 00:12:21.884 "name": "raid_bdev1", 00:12:21.884 "raid_level": "raid0", 00:12:21.884 "base_bdevs": [ 00:12:21.884 "malloc1", 00:12:21.884 "malloc2", 00:12:21.884 "malloc3" 00:12:21.884 ], 00:12:21.884 "strip_size_kb": 64, 00:12:21.885 "superblock": false, 00:12:21.885 "method": "bdev_raid_create", 00:12:21.885 "req_id": 1 00:12:21.885 } 00:12:21.885 Got JSON-RPC error response 00:12:21.885 response: 00:12:21.885 { 00:12:21.885 "code": -17, 00:12:21.885 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:21.885 } 00:12:21.885 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:21.885 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:21.885 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:21.885 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:21.885 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.885 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r 
'.[]' 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:22.144 [2024-07-24 18:15:30.649307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:22.144 [2024-07-24 18:15:30.649332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.144 [2024-07-24 18:15:30.649346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22a3ee0 00:12:22.144 [2024-07-24 18:15:30.649354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.144 [2024-07-24 18:15:30.650465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.144 [2024-07-24 18:15:30.650488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:22.144 [2024-07-24 18:15:30.650531] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:22.144 [2024-07-24 18:15:30.650549] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:22.144 pt1 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.144 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.403 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:22.403 "name": "raid_bdev1", 00:12:22.403 "uuid": "3f335857-c175-4524-bf87-ae6db32bff4e", 00:12:22.403 "strip_size_kb": 64, 00:12:22.403 "state": "configuring", 00:12:22.403 "raid_level": "raid0", 00:12:22.403 "superblock": true, 00:12:22.403 "num_base_bdevs": 3, 00:12:22.403 "num_base_bdevs_discovered": 1, 00:12:22.403 "num_base_bdevs_operational": 3, 00:12:22.403 "base_bdevs_list": [ 00:12:22.403 { 00:12:22.403 "name": "pt1", 00:12:22.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.403 "is_configured": true, 00:12:22.403 "data_offset": 2048, 00:12:22.403 "data_size": 63488 00:12:22.403 }, 00:12:22.403 { 00:12:22.403 "name": null, 00:12:22.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.403 "is_configured": false, 00:12:22.403 "data_offset": 2048, 00:12:22.403 "data_size": 63488 00:12:22.403 }, 00:12:22.403 { 00:12:22.403 "name": null, 00:12:22.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.403 "is_configured": false, 00:12:22.403 "data_offset": 2048, 00:12:22.403 
"data_size": 63488 00:12:22.403 } 00:12:22.403 ] 00:12:22.403 }' 00:12:22.403 18:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:22.403 18:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.971 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:22.972 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:22.972 [2024-07-24 18:15:31.475435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:22.972 [2024-07-24 18:15:31.475469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.972 [2024-07-24 18:15:31.475480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x229b3b0 00:12:22.972 [2024-07-24 18:15:31.475489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.972 [2024-07-24 18:15:31.475752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.972 [2024-07-24 18:15:31.475765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:22.972 [2024-07-24 18:15:31.475809] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:22.972 [2024-07-24 18:15:31.475822] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:22.972 pt2 00:12:22.972 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:23.231 [2024-07-24 18:15:31.647883] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state 
raid_bdev1 configuring raid0 64 3 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.231 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.500 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.500 "name": "raid_bdev1", 00:12:23.500 "uuid": "3f335857-c175-4524-bf87-ae6db32bff4e", 00:12:23.500 "strip_size_kb": 64, 00:12:23.500 "state": "configuring", 00:12:23.500 "raid_level": "raid0", 00:12:23.500 "superblock": true, 00:12:23.500 "num_base_bdevs": 3, 00:12:23.500 "num_base_bdevs_discovered": 1, 00:12:23.500 "num_base_bdevs_operational": 3, 00:12:23.500 "base_bdevs_list": [ 00:12:23.500 { 00:12:23.500 "name": "pt1", 00:12:23.500 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.500 "is_configured": true, 00:12:23.500 "data_offset": 
2048, 00:12:23.500 "data_size": 63488 00:12:23.500 }, 00:12:23.500 { 00:12:23.500 "name": null, 00:12:23.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.500 "is_configured": false, 00:12:23.500 "data_offset": 2048, 00:12:23.500 "data_size": 63488 00:12:23.500 }, 00:12:23.500 { 00:12:23.500 "name": null, 00:12:23.500 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.500 "is_configured": false, 00:12:23.500 "data_offset": 2048, 00:12:23.500 "data_size": 63488 00:12:23.500 } 00:12:23.500 ] 00:12:23.500 }' 00:12:23.500 18:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.500 18:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.761 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:23.761 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:23.761 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.020 [2024-07-24 18:15:32.461969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:24.020 [2024-07-24 18:15:32.462004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.020 [2024-07-24 18:15:32.462021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x243c6e0 00:12:24.020 [2024-07-24 18:15:32.462030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.020 [2024-07-24 18:15:32.462264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.020 [2024-07-24 18:15:32.462276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:24.020 [2024-07-24 18:15:32.462318] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt2 00:12:24.020 [2024-07-24 18:15:32.462331] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:24.020 pt2 00:12:24.020 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:24.020 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:24.020 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:24.279 [2024-07-24 18:15:32.630411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:24.279 [2024-07-24 18:15:32.630438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.279 [2024-07-24 18:15:32.630451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x243d270 00:12:24.279 [2024-07-24 18:15:32.630459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.279 [2024-07-24 18:15:32.630684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.279 [2024-07-24 18:15:32.630697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:24.279 [2024-07-24 18:15:32.630734] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:24.279 [2024-07-24 18:15:32.630747] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:24.279 [2024-07-24 18:15:32.630820] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x243e340 00:12:24.279 [2024-07-24 18:15:32.630827] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:24.279 [2024-07-24 18:15:32.630939] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24470f0 00:12:24.279 [2024-07-24 18:15:32.631022] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x243e340 00:12:24.279 [2024-07-24 18:15:32.631029] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x243e340 00:12:24.279 [2024-07-24 18:15:32.631093] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.279 pt3 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:24.279 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:24.280 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.280 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:24.280 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.280 "name": "raid_bdev1", 00:12:24.280 "uuid": "3f335857-c175-4524-bf87-ae6db32bff4e", 00:12:24.280 "strip_size_kb": 64, 00:12:24.280 "state": "online", 00:12:24.280 "raid_level": "raid0", 00:12:24.280 "superblock": true, 00:12:24.280 "num_base_bdevs": 3, 00:12:24.280 "num_base_bdevs_discovered": 3, 00:12:24.280 "num_base_bdevs_operational": 3, 00:12:24.280 "base_bdevs_list": [ 00:12:24.280 { 00:12:24.280 "name": "pt1", 00:12:24.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:24.280 "is_configured": true, 00:12:24.280 "data_offset": 2048, 00:12:24.280 "data_size": 63488 00:12:24.280 }, 00:12:24.280 { 00:12:24.280 "name": "pt2", 00:12:24.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.280 "is_configured": true, 00:12:24.280 "data_offset": 2048, 00:12:24.280 "data_size": 63488 00:12:24.280 }, 00:12:24.280 { 00:12:24.280 "name": "pt3", 00:12:24.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.280 "is_configured": true, 00:12:24.280 "data_offset": 2048, 00:12:24.280 "data_size": 63488 00:12:24.280 } 00:12:24.280 ] 00:12:24.280 }' 00:12:24.280 18:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.280 18:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.848 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:24.848 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:24.848 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:24.848 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:24.848 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:24.848 18:15:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:12:24.848 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:24.848 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:25.110 [2024-07-24 18:15:33.456760] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.110 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:25.110 "name": "raid_bdev1", 00:12:25.110 "aliases": [ 00:12:25.110 "3f335857-c175-4524-bf87-ae6db32bff4e" 00:12:25.110 ], 00:12:25.110 "product_name": "Raid Volume", 00:12:25.110 "block_size": 512, 00:12:25.110 "num_blocks": 190464, 00:12:25.110 "uuid": "3f335857-c175-4524-bf87-ae6db32bff4e", 00:12:25.110 "assigned_rate_limits": { 00:12:25.110 "rw_ios_per_sec": 0, 00:12:25.110 "rw_mbytes_per_sec": 0, 00:12:25.110 "r_mbytes_per_sec": 0, 00:12:25.110 "w_mbytes_per_sec": 0 00:12:25.110 }, 00:12:25.110 "claimed": false, 00:12:25.110 "zoned": false, 00:12:25.110 "supported_io_types": { 00:12:25.110 "read": true, 00:12:25.110 "write": true, 00:12:25.110 "unmap": true, 00:12:25.110 "flush": true, 00:12:25.110 "reset": true, 00:12:25.110 "nvme_admin": false, 00:12:25.110 "nvme_io": false, 00:12:25.110 "nvme_io_md": false, 00:12:25.110 "write_zeroes": true, 00:12:25.110 "zcopy": false, 00:12:25.110 "get_zone_info": false, 00:12:25.110 "zone_management": false, 00:12:25.110 "zone_append": false, 00:12:25.110 "compare": false, 00:12:25.110 "compare_and_write": false, 00:12:25.110 "abort": false, 00:12:25.110 "seek_hole": false, 00:12:25.110 "seek_data": false, 00:12:25.110 "copy": false, 00:12:25.110 "nvme_iov_md": false 00:12:25.110 }, 00:12:25.110 "memory_domains": [ 00:12:25.110 { 00:12:25.110 "dma_device_id": "system", 00:12:25.110 "dma_device_type": 1 00:12:25.110 }, 00:12:25.110 { 00:12:25.110 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:25.110 "dma_device_type": 2 00:12:25.110 }, 00:12:25.110 { 00:12:25.110 "dma_device_id": "system", 00:12:25.110 "dma_device_type": 1 00:12:25.110 }, 00:12:25.110 { 00:12:25.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.110 "dma_device_type": 2 00:12:25.111 }, 00:12:25.111 { 00:12:25.111 "dma_device_id": "system", 00:12:25.111 "dma_device_type": 1 00:12:25.111 }, 00:12:25.111 { 00:12:25.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.111 "dma_device_type": 2 00:12:25.111 } 00:12:25.111 ], 00:12:25.111 "driver_specific": { 00:12:25.111 "raid": { 00:12:25.111 "uuid": "3f335857-c175-4524-bf87-ae6db32bff4e", 00:12:25.111 "strip_size_kb": 64, 00:12:25.111 "state": "online", 00:12:25.111 "raid_level": "raid0", 00:12:25.111 "superblock": true, 00:12:25.111 "num_base_bdevs": 3, 00:12:25.111 "num_base_bdevs_discovered": 3, 00:12:25.111 "num_base_bdevs_operational": 3, 00:12:25.111 "base_bdevs_list": [ 00:12:25.111 { 00:12:25.111 "name": "pt1", 00:12:25.111 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.111 "is_configured": true, 00:12:25.111 "data_offset": 2048, 00:12:25.111 "data_size": 63488 00:12:25.111 }, 00:12:25.111 { 00:12:25.111 "name": "pt2", 00:12:25.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.111 "is_configured": true, 00:12:25.111 "data_offset": 2048, 00:12:25.111 "data_size": 63488 00:12:25.111 }, 00:12:25.111 { 00:12:25.111 "name": "pt3", 00:12:25.111 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.111 "is_configured": true, 00:12:25.111 "data_offset": 2048, 00:12:25.111 "data_size": 63488 00:12:25.111 } 00:12:25.111 ] 00:12:25.111 } 00:12:25.111 } 00:12:25.111 }' 00:12:25.111 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.111 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:25.111 pt2 00:12:25.111 pt3' 00:12:25.111 
18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:25.111 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:25.111 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:25.111 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:25.111 "name": "pt1", 00:12:25.111 "aliases": [ 00:12:25.111 "00000000-0000-0000-0000-000000000001" 00:12:25.111 ], 00:12:25.111 "product_name": "passthru", 00:12:25.111 "block_size": 512, 00:12:25.111 "num_blocks": 65536, 00:12:25.111 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.111 "assigned_rate_limits": { 00:12:25.111 "rw_ios_per_sec": 0, 00:12:25.111 "rw_mbytes_per_sec": 0, 00:12:25.111 "r_mbytes_per_sec": 0, 00:12:25.111 "w_mbytes_per_sec": 0 00:12:25.111 }, 00:12:25.111 "claimed": true, 00:12:25.111 "claim_type": "exclusive_write", 00:12:25.111 "zoned": false, 00:12:25.111 "supported_io_types": { 00:12:25.111 "read": true, 00:12:25.111 "write": true, 00:12:25.111 "unmap": true, 00:12:25.111 "flush": true, 00:12:25.111 "reset": true, 00:12:25.111 "nvme_admin": false, 00:12:25.111 "nvme_io": false, 00:12:25.111 "nvme_io_md": false, 00:12:25.111 "write_zeroes": true, 00:12:25.111 "zcopy": true, 00:12:25.111 "get_zone_info": false, 00:12:25.111 "zone_management": false, 00:12:25.111 "zone_append": false, 00:12:25.111 "compare": false, 00:12:25.111 "compare_and_write": false, 00:12:25.111 "abort": true, 00:12:25.111 "seek_hole": false, 00:12:25.111 "seek_data": false, 00:12:25.111 "copy": true, 00:12:25.111 "nvme_iov_md": false 00:12:25.111 }, 00:12:25.111 "memory_domains": [ 00:12:25.111 { 00:12:25.111 "dma_device_id": "system", 00:12:25.111 "dma_device_type": 1 00:12:25.111 }, 00:12:25.111 { 00:12:25.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.111 
"dma_device_type": 2 00:12:25.111 } 00:12:25.111 ], 00:12:25.111 "driver_specific": { 00:12:25.111 "passthru": { 00:12:25.111 "name": "pt1", 00:12:25.111 "base_bdev_name": "malloc1" 00:12:25.111 } 00:12:25.111 } 00:12:25.111 }' 00:12:25.111 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:25.370 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:25.371 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:25.371 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:25.371 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:25.371 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:25.371 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:25.371 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:25.371 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:25.371 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:25.630 18:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:25.630 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:25.630 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:25.630 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:25.630 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:25.630 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:25.630 "name": "pt2", 00:12:25.630 "aliases": [ 00:12:25.630 
"00000000-0000-0000-0000-000000000002" 00:12:25.630 ], 00:12:25.630 "product_name": "passthru", 00:12:25.630 "block_size": 512, 00:12:25.630 "num_blocks": 65536, 00:12:25.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.630 "assigned_rate_limits": { 00:12:25.630 "rw_ios_per_sec": 0, 00:12:25.630 "rw_mbytes_per_sec": 0, 00:12:25.630 "r_mbytes_per_sec": 0, 00:12:25.630 "w_mbytes_per_sec": 0 00:12:25.630 }, 00:12:25.630 "claimed": true, 00:12:25.630 "claim_type": "exclusive_write", 00:12:25.630 "zoned": false, 00:12:25.630 "supported_io_types": { 00:12:25.630 "read": true, 00:12:25.630 "write": true, 00:12:25.630 "unmap": true, 00:12:25.630 "flush": true, 00:12:25.630 "reset": true, 00:12:25.630 "nvme_admin": false, 00:12:25.630 "nvme_io": false, 00:12:25.630 "nvme_io_md": false, 00:12:25.630 "write_zeroes": true, 00:12:25.630 "zcopy": true, 00:12:25.630 "get_zone_info": false, 00:12:25.630 "zone_management": false, 00:12:25.630 "zone_append": false, 00:12:25.630 "compare": false, 00:12:25.630 "compare_and_write": false, 00:12:25.630 "abort": true, 00:12:25.630 "seek_hole": false, 00:12:25.630 "seek_data": false, 00:12:25.630 "copy": true, 00:12:25.630 "nvme_iov_md": false 00:12:25.630 }, 00:12:25.630 "memory_domains": [ 00:12:25.630 { 00:12:25.630 "dma_device_id": "system", 00:12:25.630 "dma_device_type": 1 00:12:25.630 }, 00:12:25.630 { 00:12:25.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.630 "dma_device_type": 2 00:12:25.630 } 00:12:25.630 ], 00:12:25.630 "driver_specific": { 00:12:25.630 "passthru": { 00:12:25.630 "name": "pt2", 00:12:25.630 "base_bdev_name": "malloc2" 00:12:25.630 } 00:12:25.630 } 00:12:25.630 }' 00:12:25.630 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:25.889 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:25.889 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:25.889 18:15:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:25.889 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:25.889 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:25.889 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:25.889 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:25.889 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:25.889 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:25.889 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:26.149 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:26.149 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:26.149 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:26.149 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:26.149 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:26.149 "name": "pt3", 00:12:26.149 "aliases": [ 00:12:26.149 "00000000-0000-0000-0000-000000000003" 00:12:26.149 ], 00:12:26.149 "product_name": "passthru", 00:12:26.149 "block_size": 512, 00:12:26.149 "num_blocks": 65536, 00:12:26.149 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.149 "assigned_rate_limits": { 00:12:26.149 "rw_ios_per_sec": 0, 00:12:26.149 "rw_mbytes_per_sec": 0, 00:12:26.149 "r_mbytes_per_sec": 0, 00:12:26.149 "w_mbytes_per_sec": 0 00:12:26.149 }, 00:12:26.149 "claimed": true, 00:12:26.149 "claim_type": "exclusive_write", 00:12:26.149 "zoned": false, 00:12:26.149 "supported_io_types": { 
00:12:26.149 "read": true, 00:12:26.149 "write": true, 00:12:26.149 "unmap": true, 00:12:26.149 "flush": true, 00:12:26.149 "reset": true, 00:12:26.149 "nvme_admin": false, 00:12:26.149 "nvme_io": false, 00:12:26.149 "nvme_io_md": false, 00:12:26.149 "write_zeroes": true, 00:12:26.149 "zcopy": true, 00:12:26.149 "get_zone_info": false, 00:12:26.149 "zone_management": false, 00:12:26.149 "zone_append": false, 00:12:26.149 "compare": false, 00:12:26.149 "compare_and_write": false, 00:12:26.149 "abort": true, 00:12:26.149 "seek_hole": false, 00:12:26.149 "seek_data": false, 00:12:26.149 "copy": true, 00:12:26.149 "nvme_iov_md": false 00:12:26.149 }, 00:12:26.149 "memory_domains": [ 00:12:26.149 { 00:12:26.149 "dma_device_id": "system", 00:12:26.149 "dma_device_type": 1 00:12:26.149 }, 00:12:26.149 { 00:12:26.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.149 "dma_device_type": 2 00:12:26.149 } 00:12:26.149 ], 00:12:26.149 "driver_specific": { 00:12:26.149 "passthru": { 00:12:26.149 "name": "pt3", 00:12:26.149 "base_bdev_name": "malloc3" 00:12:26.149 } 00:12:26.149 } 00:12:26.149 }' 00:12:26.149 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:26.149 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:26.408 18:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:26.668 [2024-07-24 18:15:35.117037] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3f335857-c175-4524-bf87-ae6db32bff4e '!=' 3f335857-c175-4524-bf87-ae6db32bff4e ']' 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2180529 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2180529 ']' 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2180529 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2180529 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:26.668 18:15:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2180529' 00:12:26.668 killing process with pid 2180529 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2180529 00:12:26.668 [2024-07-24 18:15:35.191874] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.668 [2024-07-24 18:15:35.191915] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.668 [2024-07-24 18:15:35.191954] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.668 [2024-07-24 18:15:35.191961] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x243e340 name raid_bdev1, state offline 00:12:26.668 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2180529 00:12:26.668 [2024-07-24 18:15:35.214531] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.927 18:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:26.927 00:12:26.927 real 0m10.621s 00:12:26.927 user 0m19.003s 00:12:26.927 sys 0m2.054s 00:12:26.927 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.927 18:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.927 ************************************ 00:12:26.927 END TEST raid_superblock_test 00:12:26.927 ************************************ 00:12:26.927 18:15:35 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:26.927 18:15:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:26.927 18:15:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.927 18:15:35 bdev_raid -- common/autotest_common.sh@10 
-- # set +x 00:12:26.927 ************************************ 00:12:26.927 START TEST raid_read_error_test 00:12:26.927 ************************************ 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:26.927 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.yk8ihq02X0 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2182686 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2182686 /var/tmp/spdk-raid.sock 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2182686 ']' 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:12:26.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:26.928 18:15:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.187 [2024-07-24 18:15:35.537848] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:12:27.187 [2024-07-24 18:15:35.537892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182686 ] 00:12:27.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.187 EAL: Requested device 0000:b3:01.0 cannot be used 00:12:27.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.187 EAL: Requested device 0000:b3:01.1 cannot be used 00:12:27.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.187 EAL: Requested device 0000:b3:01.2 cannot be used 00:12:27.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.187 EAL: Requested device 0000:b3:01.3 cannot be used 00:12:27.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.187 EAL: Requested device 0000:b3:01.4 cannot be used 00:12:27.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.187 EAL: Requested device 0000:b3:01.5 cannot be used 00:12:27.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.187 EAL: Requested device 0000:b3:01.6 cannot be used 00:12:27.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.187 EAL: Requested device 0000:b3:01.7 cannot be used 00:12:27.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 
0000:b3:02.0 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b3:02.1 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b3:02.2 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b3:02.3 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b3:02.4 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b3:02.5 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b3:02.6 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b3:02.7 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:01.0 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:01.1 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:01.2 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:01.3 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:01.4 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:01.5 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:01.6 cannot be 
used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:01.7 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:02.0 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:02.1 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:02.2 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:02.3 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:02.4 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:02.5 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:02.6 cannot be used 00:12:27.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:27.188 EAL: Requested device 0000:b5:02.7 cannot be used 00:12:27.188 [2024-07-24 18:15:35.629944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.188 [2024-07-24 18:15:35.702027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.188 [2024-07-24 18:15:35.755324] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.188 [2024-07-24 18:15:35.755351] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.757 18:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:27.757 18:15:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:27.757 18:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- 
# for bdev in "${base_bdevs[@]}" 00:12:27.757 18:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:28.016 BaseBdev1_malloc 00:12:28.016 18:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:28.276 true 00:12:28.276 18:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:28.276 [2024-07-24 18:15:36.815418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:28.276 [2024-07-24 18:15:36.815450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.276 [2024-07-24 18:15:36.815462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cfbed0 00:12:28.276 [2024-07-24 18:15:36.815470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.276 [2024-07-24 18:15:36.816608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.276 [2024-07-24 18:15:36.816640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:28.276 BaseBdev1 00:12:28.276 18:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:28.276 18:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:28.535 BaseBdev2_malloc 00:12:28.535 18:15:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:28.795 true 00:12:28.795 18:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:28.795 [2024-07-24 18:15:37.320227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:28.795 [2024-07-24 18:15:37.320261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.795 [2024-07-24 18:15:37.320273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d00b60 00:12:28.795 [2024-07-24 18:15:37.320281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.795 [2024-07-24 18:15:37.321267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.795 [2024-07-24 18:15:37.321290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:28.795 BaseBdev2 00:12:28.795 18:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:28.795 18:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:29.054 BaseBdev3_malloc 00:12:29.054 18:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:29.313 true 00:12:29.313 18:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:29.313 [2024-07-24 18:15:37.829176] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:29.313 [2024-07-24 18:15:37.829209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.313 [2024-07-24 18:15:37.829225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d01ad0 00:12:29.313 [2024-07-24 18:15:37.829233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.313 [2024-07-24 18:15:37.830242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.313 [2024-07-24 18:15:37.830269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:29.313 BaseBdev3 00:12:29.313 18:15:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:29.572 [2024-07-24 18:15:37.993634] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.572 [2024-07-24 18:15:37.994471] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.572 [2024-07-24 18:15:37.994519] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.572 [2024-07-24 18:15:37.994663] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d038e0 00:12:29.572 [2024-07-24 18:15:37.994671] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:29.572 [2024-07-24 18:15:37.994795] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b57870 00:12:29.572 [2024-07-24 18:15:37.994894] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d038e0 00:12:29.573 [2024-07-24 18:15:37.994900] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d038e0 
00:12:29.573 [2024-07-24 18:15:37.994965] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.573 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.832 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:29.832 "name": "raid_bdev1", 00:12:29.832 "uuid": "f8e7b2e7-1f24-436a-9bcc-dc9e98e343b7", 00:12:29.832 "strip_size_kb": 64, 00:12:29.832 "state": "online", 00:12:29.832 "raid_level": "raid0", 00:12:29.832 "superblock": true, 00:12:29.832 "num_base_bdevs": 3, 00:12:29.832 "num_base_bdevs_discovered": 3, 00:12:29.832 "num_base_bdevs_operational": 3, 
00:12:29.832 "base_bdevs_list": [ 00:12:29.832 { 00:12:29.832 "name": "BaseBdev1", 00:12:29.832 "uuid": "af22c4f6-ffc9-568c-8bbc-10cebda2c86a", 00:12:29.832 "is_configured": true, 00:12:29.832 "data_offset": 2048, 00:12:29.832 "data_size": 63488 00:12:29.832 }, 00:12:29.832 { 00:12:29.832 "name": "BaseBdev2", 00:12:29.833 "uuid": "925c7ae9-cb54-5c40-a258-cec9c9c85e48", 00:12:29.833 "is_configured": true, 00:12:29.833 "data_offset": 2048, 00:12:29.833 "data_size": 63488 00:12:29.833 }, 00:12:29.833 { 00:12:29.833 "name": "BaseBdev3", 00:12:29.833 "uuid": "56e23fb6-02d6-5091-b7bb-cf5638b46b28", 00:12:29.833 "is_configured": true, 00:12:29.833 "data_offset": 2048, 00:12:29.833 "data_size": 63488 00:12:29.833 } 00:12:29.833 ] 00:12:29.833 }' 00:12:29.833 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:29.833 18:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.091 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:30.091 18:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:30.350 [2024-07-24 18:15:38.743769] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1854b30 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 
-- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.287 18:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.547 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:31.547 "name": "raid_bdev1", 00:12:31.547 "uuid": "f8e7b2e7-1f24-436a-9bcc-dc9e98e343b7", 00:12:31.547 "strip_size_kb": 64, 00:12:31.547 "state": "online", 00:12:31.547 "raid_level": "raid0", 00:12:31.547 "superblock": true, 00:12:31.547 "num_base_bdevs": 3, 00:12:31.547 "num_base_bdevs_discovered": 3, 00:12:31.547 "num_base_bdevs_operational": 3, 00:12:31.547 "base_bdevs_list": [ 00:12:31.547 { 00:12:31.547 "name": "BaseBdev1", 00:12:31.547 "uuid": "af22c4f6-ffc9-568c-8bbc-10cebda2c86a", 00:12:31.547 "is_configured": true, 00:12:31.547 
"data_offset": 2048, 00:12:31.547 "data_size": 63488 00:12:31.547 }, 00:12:31.547 { 00:12:31.547 "name": "BaseBdev2", 00:12:31.547 "uuid": "925c7ae9-cb54-5c40-a258-cec9c9c85e48", 00:12:31.547 "is_configured": true, 00:12:31.547 "data_offset": 2048, 00:12:31.547 "data_size": 63488 00:12:31.547 }, 00:12:31.547 { 00:12:31.547 "name": "BaseBdev3", 00:12:31.547 "uuid": "56e23fb6-02d6-5091-b7bb-cf5638b46b28", 00:12:31.547 "is_configured": true, 00:12:31.547 "data_offset": 2048, 00:12:31.547 "data_size": 63488 00:12:31.547 } 00:12:31.547 ] 00:12:31.547 }' 00:12:31.547 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:31.547 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.115 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:32.115 [2024-07-24 18:15:40.680195] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.115 [2024-07-24 18:15:40.680226] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.115 [2024-07-24 18:15:40.682174] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.115 [2024-07-24 18:15:40.682198] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.115 [2024-07-24 18:15:40.682219] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.115 [2024-07-24 18:15:40.682226] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d038e0 name raid_bdev1, state offline 00:12:32.115 0 00:12:32.115 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2182686 00:12:32.115 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2182686 ']' 00:12:32.115 18:15:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2182686 00:12:32.115 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:32.115 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.115 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2182686 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2182686' 00:12:32.374 killing process with pid 2182686 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2182686 00:12:32.374 [2024-07-24 18:15:40.749390] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2182686 00:12:32.374 [2024-07-24 18:15:40.766599] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.yk8ihq02X0 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:32.374 18:15:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:12:32.374 00:12:32.374 real 0m5.481s 00:12:32.374 user 0m8.356s 00:12:32.374 sys 0m0.983s 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.374 18:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.374 ************************************ 00:12:32.374 END TEST raid_read_error_test 00:12:32.374 ************************************ 00:12:32.634 18:15:40 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:32.634 18:15:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:32.634 18:15:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.634 18:15:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.634 ************************************ 00:12:32.634 START TEST raid_write_error_test 00:12:32.634 ************************************ 00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs ))
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']'
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64'
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.fpTRXIFwRa
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2183642
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2183642 /var/tmp/spdk-raid.sock
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2183642 ']'
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:12:32.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:32.634 18:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:32.634 [2024-07-24 18:15:41.104106] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:12:32.634 [2024-07-24 18:15:41.104151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183642 ]
00:12:32.634 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.634 EAL: Requested device 0000:b3:01.0 cannot be used
00:12:32.634 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.634 EAL: Requested device 0000:b3:01.1 cannot be used
00:12:32.634 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.634 EAL: Requested device 0000:b3:01.2 cannot be used
00:12:32.634 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:01.3 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:01.4 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:01.5 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:01.6 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:01.7 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:02.0 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:02.1 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:02.2 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:02.3 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:02.4 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:02.5 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:02.6 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b3:02.7 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:01.0 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:01.1 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:01.2 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:01.3 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:01.4 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:01.5 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:01.6 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:01.7 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:02.0 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:02.1 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:02.2 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:02.3 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:02.4 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:02.5 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:02.6 cannot be used
00:12:32.635 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.635 EAL: Requested device 0000:b5:02.7 cannot be used
00:12:32.635 [2024-07-24 18:15:41.198684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:32.894 [2024-07-24 18:15:41.268328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:32.894 [2024-07-24 18:15:41.323232] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:32.894 [2024-07-24 18:15:41.323262] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:33.462 18:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:33.462 18:15:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:12:33.462 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:12:33.462 18:15:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:33.721 BaseBdev1_malloc
00:12:33.721 18:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
00:12:33.721 true
00:12:33.721 18:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:12:33.981 [2024-07-24 18:15:42.383594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:12:33.981 [2024-07-24 18:15:42.383639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:33.981 [2024-07-24 18:15:42.383653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26bfed0
00:12:33.981 [2024-07-24 18:15:42.383661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:33.981 [2024-07-24 18:15:42.384786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:33.981 [2024-07-24 18:15:42.384812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:33.981 BaseBdev1
00:12:33.981 18:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:12:33.981 18:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:33.981 BaseBdev2_malloc
00:12:34.240 18:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
00:12:34.240 true
00:12:34.240 18:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:12:34.500 [2024-07-24 18:15:42.920496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:12:34.500 [2024-07-24 18:15:42.920532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:34.500 [2024-07-24 18:15:42.920546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26c4b60
00:12:34.500 [2024-07-24 18:15:42.920555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:34.500 [2024-07-24 18:15:42.921594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:34.500 [2024-07-24 18:15:42.921618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:34.500 BaseBdev2
00:12:34.500 18:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:12:34.500 18:15:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:12:34.500 BaseBdev3_malloc
00:12:34.759 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc
00:12:34.759 true
00:12:34.759 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:12:35.018 [2024-07-24 18:15:43.425316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:12:35.018 [2024-07-24 18:15:43.425350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:35.018 [2024-07-24 18:15:43.425367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26c5ad0
00:12:35.018 [2024-07-24 18:15:43.425375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:35.018 [2024-07-24 18:15:43.426359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:35.018 [2024-07-24 18:15:43.426383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:12:35.018 BaseBdev3
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
00:12:35.018 [2024-07-24 18:15:43.577745] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:35.018 [2024-07-24 18:15:43.578537] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:35.018 [2024-07-24 18:15:43.578581] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:35.018 [2024-07-24 18:15:43.578720] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x26c78e0
00:12:35.018 [2024-07-24 18:15:43.578728] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:12:35.018 [2024-07-24 18:15:43.578851] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x251b870
00:12:35.018 [2024-07-24 18:15:43.578947] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26c78e0
00:12:35.018 [2024-07-24 18:15:43.578953] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26c78e0
00:12:35.018 [2024-07-24 18:15:43.579015] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:35.018 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:35.276 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:35.276 "name": "raid_bdev1",
00:12:35.276 "uuid": "430ad7c5-9520-4ae8-affd-fa3718fa2577",
00:12:35.276 "strip_size_kb": 64,
00:12:35.276 "state": "online",
00:12:35.276 "raid_level": "raid0",
00:12:35.276 "superblock": true,
00:12:35.276 "num_base_bdevs": 3,
00:12:35.276 "num_base_bdevs_discovered": 3,
00:12:35.276 "num_base_bdevs_operational": 3,
00:12:35.276 "base_bdevs_list": [
00:12:35.276 {
00:12:35.276 "name": "BaseBdev1",
00:12:35.276 "uuid": "b7d09cd5-9dfc-5ec0-8424-1d025c111747",
00:12:35.276 "is_configured": true,
00:12:35.276 "data_offset": 2048,
00:12:35.276 "data_size": 63488
00:12:35.276 },
00:12:35.276 {
00:12:35.276 "name": "BaseBdev2",
00:12:35.276 "uuid": "7e631aeb-2fdb-593a-99e7-57d563939801",
00:12:35.276 "is_configured": true,
00:12:35.276 "data_offset": 2048,
00:12:35.276 "data_size": 63488
00:12:35.276 },
00:12:35.276 {
00:12:35.276 "name": "BaseBdev3",
00:12:35.276 "uuid": "37fa7912-ba9c-57ad-875d-5d6c60999d95",
00:12:35.276 "is_configured": true,
00:12:35.276 "data_offset": 2048,
00:12:35.276 "data_size": 63488
00:12:35.276 }
00:12:35.276 ]
00:12:35.276 }'
00:12:35.276 18:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:35.276 18:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:35.844 18:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1
00:12:35.844 18:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:12:35.844 [2024-07-24 18:15:44.335921] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2218b30
00:12:36.781 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]]
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:37.041 "name": "raid_bdev1",
00:12:37.041 "uuid": "430ad7c5-9520-4ae8-affd-fa3718fa2577",
00:12:37.041 "strip_size_kb": 64,
00:12:37.041 "state": "online",
00:12:37.041 "raid_level": "raid0",
00:12:37.041 "superblock": true,
00:12:37.041 "num_base_bdevs": 3,
00:12:37.041 "num_base_bdevs_discovered": 3,
00:12:37.041 "num_base_bdevs_operational": 3,
00:12:37.041 "base_bdevs_list": [
00:12:37.041 {
00:12:37.041 "name": "BaseBdev1",
00:12:37.041 "uuid": "b7d09cd5-9dfc-5ec0-8424-1d025c111747",
00:12:37.041 "is_configured": true,
00:12:37.041 "data_offset": 2048,
00:12:37.041 "data_size": 63488
00:12:37.041 },
00:12:37.041 {
00:12:37.041 "name": "BaseBdev2",
00:12:37.041 "uuid": "7e631aeb-2fdb-593a-99e7-57d563939801",
00:12:37.041 "is_configured": true,
00:12:37.041 "data_offset": 2048,
00:12:37.041 "data_size": 63488
00:12:37.041 },
00:12:37.041 {
00:12:37.041 "name": "BaseBdev3",
00:12:37.041 "uuid": "37fa7912-ba9c-57ad-875d-5d6c60999d95",
00:12:37.041 "is_configured": true,
00:12:37.041 "data_offset": 2048,
00:12:37.041 "data_size": 63488
00:12:37.041 }
00:12:37.041 ]
00:12:37.041 }'
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:37.041 18:15:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.610 18:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:12:37.869 [2024-07-24 18:15:46.243620] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:37.869 [2024-07-24 18:15:46.243659] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:37.869 [2024-07-24 18:15:46.245683] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:37.869 [2024-07-24 18:15:46.245709] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:37.869 [2024-07-24 18:15:46.245731] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:37.869 [2024-07-24 18:15:46.245738] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26c78e0 name raid_bdev1, state offline
00:12:37.869 0
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2183642
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2183642 ']'
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2183642
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2183642
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2183642'
killing process with pid 2183642
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2183642
00:12:37.869 [2024-07-24 18:15:46.306824] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:37.869 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2183642
00:12:37.869 [2024-07-24 18:15:46.325101] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:38.129 18:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.fpTRXIFwRa
00:12:38.129 18:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1
00:12:38.129 18:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}'
00:12:38.129 18:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53
00:12:38.129 18:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0
00:12:38.129 18:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:12:38.129 18:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1
00:12:38.129 18:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]]
00:12:38.129
00:12:38.129 real 0m5.480s
00:12:38.129 user 0m8.323s
00:12:38.129 sys 0m0.987s
00:12:38.129 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:38.129 18:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.129 ************************************
00:12:38.129 END TEST raid_write_error_test
00:12:38.129 ************************************
00:12:38.129 18:15:46 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1
00:12:38.129 18:15:46 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false
00:12:38.129 18:15:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:38.129 18:15:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:38.129 18:15:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:38.129 ************************************
00:12:38.129 START TEST raid_state_function_test
00:12:38.129 ************************************
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 ))
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:12:38.129 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']'
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64'
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']'
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg=
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2184773
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2184773'
Process raid pid: 2184773
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2184773 /var/tmp/spdk-raid.sock
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2184773 ']'
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:38.130 18:15:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.130 [2024-07-24 18:15:46.661412] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:12:38.130 [2024-07-24 18:15:46.661457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:01.0 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:01.1 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:01.2 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:01.3 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:01.4 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:01.5 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:01.6 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:01.7 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:02.0 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:02.1 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:02.2 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:02.3 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:02.4 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:02.5 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:02.6 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b3:02.7 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:01.0 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:01.1 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:01.2 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:01.3 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:01.4 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:01.5 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:01.6 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:01.7 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:02.0 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:02.1 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:02.2 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:02.3 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:02.4 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:02.5 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:02.6 cannot be used
00:12:38.130 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:38.130 EAL: Requested device 0000:b5:02.7 cannot be used
00:12:38.390 [2024-07-24 18:15:46.755737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:38.390 [2024-07-24 18:15:46.822864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:38.390 [2024-07-24 18:15:46.873399] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:38.390 [2024-07-24 18:15:46.873423] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:39.017 18:15:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:39.017 18:15:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:12:39.017 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:12:39.276 [2024-07-24 18:15:47.616345] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:39.276 [2024-07-24 18:15:47.616380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:39.276 [2024-07-24 18:15:47.616387] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:39.276 [2024-07-24 18:15:47.616394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:39.276 [2024-07-24 18:15:47.616400] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:39.276 [2024-07-24 18:15:47.616407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:12:39.276 "name": "Existed_Raid",
00:12:39.276 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.276 "strip_size_kb": 64,
00:12:39.276 "state": "configuring",
00:12:39.276 "raid_level": "concat",
00:12:39.276 "superblock": false,
00:12:39.276 "num_base_bdevs": 3,
00:12:39.276 "num_base_bdevs_discovered": 0,
00:12:39.276 "num_base_bdevs_operational": 3,
00:12:39.276 "base_bdevs_list": [
00:12:39.276 {
00:12:39.276 "name": "BaseBdev1",
00:12:39.276 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.276 "is_configured": false,
00:12:39.276 "data_offset": 0,
00:12:39.276 "data_size": 0
00:12:39.276 },
00:12:39.276 {
00:12:39.276 "name": "BaseBdev2",
00:12:39.276 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.276 "is_configured": false,
00:12:39.276 "data_offset": 0,
00:12:39.276 "data_size": 0
00:12:39.276 },
00:12:39.276 {
00:12:39.276 "name": "BaseBdev3",
00:12:39.276 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.276 "is_configured": false,
00:12:39.276 "data_offset": 0,
00:12:39.276 "data_size": 0
00:12:39.276 }
00:12:39.276 ]
00:12:39.276 }'
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:12:39.276 18:15:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.844 18:15:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:12:40.102 [2024-07-24 18:15:48.446391] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:40.102 [2024-07-24 18:15:48.446412] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x125b1c0 name Existed_Raid, state configuring
00:12:40.102 18:15:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 --
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:40.102 [2024-07-24 18:15:48.606823] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:40.102 [2024-07-24 18:15:48.606843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:40.102 [2024-07-24 18:15:48.606849] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:40.102 [2024-07-24 18:15:48.606857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:40.102 [2024-07-24 18:15:48.606862] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:40.102 [2024-07-24 18:15:48.606870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:40.102 18:15:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:40.361 [2024-07-24 18:15:48.783801] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.361 BaseBdev1 00:12:40.361 18:15:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:40.361 18:15:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:40.361 18:15:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:40.361 18:15:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:40.361 18:15:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:40.361 18:15:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:12:40.361 18:15:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:40.619 18:15:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:40.619 [ 00:12:40.619 { 00:12:40.619 "name": "BaseBdev1", 00:12:40.619 "aliases": [ 00:12:40.619 "e4ba0a27-0b84-48d1-adf6-37c39593a184" 00:12:40.619 ], 00:12:40.619 "product_name": "Malloc disk", 00:12:40.619 "block_size": 512, 00:12:40.619 "num_blocks": 65536, 00:12:40.619 "uuid": "e4ba0a27-0b84-48d1-adf6-37c39593a184", 00:12:40.619 "assigned_rate_limits": { 00:12:40.619 "rw_ios_per_sec": 0, 00:12:40.619 "rw_mbytes_per_sec": 0, 00:12:40.619 "r_mbytes_per_sec": 0, 00:12:40.619 "w_mbytes_per_sec": 0 00:12:40.619 }, 00:12:40.619 "claimed": true, 00:12:40.619 "claim_type": "exclusive_write", 00:12:40.619 "zoned": false, 00:12:40.619 "supported_io_types": { 00:12:40.619 "read": true, 00:12:40.619 "write": true, 00:12:40.619 "unmap": true, 00:12:40.619 "flush": true, 00:12:40.619 "reset": true, 00:12:40.619 "nvme_admin": false, 00:12:40.619 "nvme_io": false, 00:12:40.619 "nvme_io_md": false, 00:12:40.619 "write_zeroes": true, 00:12:40.619 "zcopy": true, 00:12:40.619 "get_zone_info": false, 00:12:40.619 "zone_management": false, 00:12:40.619 "zone_append": false, 00:12:40.619 "compare": false, 00:12:40.619 "compare_and_write": false, 00:12:40.619 "abort": true, 00:12:40.619 "seek_hole": false, 00:12:40.619 "seek_data": false, 00:12:40.619 "copy": true, 00:12:40.619 "nvme_iov_md": false 00:12:40.619 }, 00:12:40.619 "memory_domains": [ 00:12:40.619 { 00:12:40.619 "dma_device_id": "system", 00:12:40.619 "dma_device_type": 1 00:12:40.619 }, 00:12:40.619 { 00:12:40.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.619 "dma_device_type": 2 
00:12:40.619 } 00:12:40.619 ], 00:12:40.619 "driver_specific": {} 00:12:40.619 } 00:12:40.619 ] 00:12:40.619 18:15:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:40.619 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:40.619 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:40.620 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:40.620 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:40.620 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:40.620 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:40.620 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:40.620 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:40.620 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:40.620 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:40.620 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.620 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.878 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:40.878 "name": "Existed_Raid", 00:12:40.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.878 "strip_size_kb": 64, 00:12:40.878 "state": "configuring", 00:12:40.878 "raid_level": 
"concat", 00:12:40.878 "superblock": false, 00:12:40.878 "num_base_bdevs": 3, 00:12:40.878 "num_base_bdevs_discovered": 1, 00:12:40.878 "num_base_bdevs_operational": 3, 00:12:40.878 "base_bdevs_list": [ 00:12:40.878 { 00:12:40.878 "name": "BaseBdev1", 00:12:40.878 "uuid": "e4ba0a27-0b84-48d1-adf6-37c39593a184", 00:12:40.878 "is_configured": true, 00:12:40.878 "data_offset": 0, 00:12:40.878 "data_size": 65536 00:12:40.878 }, 00:12:40.878 { 00:12:40.878 "name": "BaseBdev2", 00:12:40.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.878 "is_configured": false, 00:12:40.878 "data_offset": 0, 00:12:40.878 "data_size": 0 00:12:40.878 }, 00:12:40.878 { 00:12:40.878 "name": "BaseBdev3", 00:12:40.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.878 "is_configured": false, 00:12:40.878 "data_offset": 0, 00:12:40.878 "data_size": 0 00:12:40.878 } 00:12:40.878 ] 00:12:40.878 }' 00:12:40.878 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:40.878 18:15:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.445 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:41.445 [2024-07-24 18:15:49.902687] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:41.445 [2024-07-24 18:15:49.902718] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x125aa90 name Existed_Raid, state configuring 00:12:41.445 18:15:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:41.704 [2024-07-24 18:15:50.083186] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.704 [2024-07-24 
18:15:50.084332] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.704 [2024-07-24 18:15:50.084360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.704 [2024-07-24 18:15:50.084368] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:41.704 [2024-07-24 18:15:50.084375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.704 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.704 "name": "Existed_Raid", 00:12:41.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.704 "strip_size_kb": 64, 00:12:41.704 "state": "configuring", 00:12:41.704 "raid_level": "concat", 00:12:41.704 "superblock": false, 00:12:41.704 "num_base_bdevs": 3, 00:12:41.704 "num_base_bdevs_discovered": 1, 00:12:41.704 "num_base_bdevs_operational": 3, 00:12:41.704 "base_bdevs_list": [ 00:12:41.704 { 00:12:41.704 "name": "BaseBdev1", 00:12:41.705 "uuid": "e4ba0a27-0b84-48d1-adf6-37c39593a184", 00:12:41.705 "is_configured": true, 00:12:41.705 "data_offset": 0, 00:12:41.705 "data_size": 65536 00:12:41.705 }, 00:12:41.705 { 00:12:41.705 "name": "BaseBdev2", 00:12:41.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.705 "is_configured": false, 00:12:41.705 "data_offset": 0, 00:12:41.705 "data_size": 0 00:12:41.705 }, 00:12:41.705 { 00:12:41.705 "name": "BaseBdev3", 00:12:41.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.705 "is_configured": false, 00:12:41.705 "data_offset": 0, 00:12:41.705 "data_size": 0 00:12:41.705 } 00:12:41.705 ] 00:12:41.705 }' 00:12:41.705 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.705 18:15:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.271 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:42.528 [2024-07-24 18:15:50.948231] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:12:42.528 BaseBdev2 00:12:42.528 18:15:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:42.528 18:15:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:42.528 18:15:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:42.528 18:15:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:42.528 18:15:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:42.528 18:15:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:42.528 18:15:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:42.786 [ 00:12:42.786 { 00:12:42.786 "name": "BaseBdev2", 00:12:42.786 "aliases": [ 00:12:42.786 "52baefe4-ac0f-4bd4-9ffa-20be75c42912" 00:12:42.786 ], 00:12:42.786 "product_name": "Malloc disk", 00:12:42.786 "block_size": 512, 00:12:42.786 "num_blocks": 65536, 00:12:42.786 "uuid": "52baefe4-ac0f-4bd4-9ffa-20be75c42912", 00:12:42.786 "assigned_rate_limits": { 00:12:42.786 "rw_ios_per_sec": 0, 00:12:42.786 "rw_mbytes_per_sec": 0, 00:12:42.786 "r_mbytes_per_sec": 0, 00:12:42.786 "w_mbytes_per_sec": 0 00:12:42.786 }, 00:12:42.786 "claimed": true, 00:12:42.786 "claim_type": "exclusive_write", 00:12:42.786 "zoned": false, 00:12:42.786 "supported_io_types": { 00:12:42.786 "read": true, 00:12:42.786 "write": true, 00:12:42.786 "unmap": true, 00:12:42.786 "flush": true, 00:12:42.786 "reset": true, 00:12:42.786 "nvme_admin": false, 00:12:42.786 "nvme_io": false, 
00:12:42.786 "nvme_io_md": false, 00:12:42.786 "write_zeroes": true, 00:12:42.786 "zcopy": true, 00:12:42.786 "get_zone_info": false, 00:12:42.786 "zone_management": false, 00:12:42.786 "zone_append": false, 00:12:42.786 "compare": false, 00:12:42.786 "compare_and_write": false, 00:12:42.786 "abort": true, 00:12:42.786 "seek_hole": false, 00:12:42.786 "seek_data": false, 00:12:42.786 "copy": true, 00:12:42.786 "nvme_iov_md": false 00:12:42.786 }, 00:12:42.786 "memory_domains": [ 00:12:42.786 { 00:12:42.786 "dma_device_id": "system", 00:12:42.786 "dma_device_type": 1 00:12:42.786 }, 00:12:42.786 { 00:12:42.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.786 "dma_device_type": 2 00:12:42.786 } 00:12:42.786 ], 00:12:42.786 "driver_specific": {} 00:12:42.786 } 00:12:42.786 ] 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.786 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.045 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:43.045 "name": "Existed_Raid", 00:12:43.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.045 "strip_size_kb": 64, 00:12:43.045 "state": "configuring", 00:12:43.045 "raid_level": "concat", 00:12:43.045 "superblock": false, 00:12:43.045 "num_base_bdevs": 3, 00:12:43.045 "num_base_bdevs_discovered": 2, 00:12:43.045 "num_base_bdevs_operational": 3, 00:12:43.045 "base_bdevs_list": [ 00:12:43.045 { 00:12:43.045 "name": "BaseBdev1", 00:12:43.045 "uuid": "e4ba0a27-0b84-48d1-adf6-37c39593a184", 00:12:43.045 "is_configured": true, 00:12:43.045 "data_offset": 0, 00:12:43.045 "data_size": 65536 00:12:43.045 }, 00:12:43.045 { 00:12:43.045 "name": "BaseBdev2", 00:12:43.045 "uuid": "52baefe4-ac0f-4bd4-9ffa-20be75c42912", 00:12:43.045 "is_configured": true, 00:12:43.045 "data_offset": 0, 00:12:43.045 "data_size": 65536 00:12:43.045 }, 00:12:43.045 { 00:12:43.045 "name": "BaseBdev3", 00:12:43.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.045 "is_configured": false, 00:12:43.045 "data_offset": 0, 00:12:43.045 "data_size": 0 00:12:43.045 } 00:12:43.045 ] 00:12:43.045 }' 00:12:43.045 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:43.045 18:15:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:43.622 18:15:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:43.622 [2024-07-24 18:15:52.122034] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.622 [2024-07-24 18:15:52.122062] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x125b980 00:12:43.622 [2024-07-24 18:15:52.122067] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:43.622 [2024-07-24 18:15:52.122195] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x125b650 00:12:43.622 [2024-07-24 18:15:52.122282] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x125b980 00:12:43.622 [2024-07-24 18:15:52.122289] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x125b980 00:12:43.622 [2024-07-24 18:15:52.122412] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.622 BaseBdev3 00:12:43.622 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:43.622 18:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:43.622 18:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:43.622 18:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:43.622 18:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:43.622 18:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:43.622 18:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:12:43.880 18:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:43.880 [ 00:12:43.880 { 00:12:43.880 "name": "BaseBdev3", 00:12:43.880 "aliases": [ 00:12:43.880 "3f46efbe-3394-4c8e-8ae4-edb70c40e499" 00:12:43.880 ], 00:12:43.880 "product_name": "Malloc disk", 00:12:43.880 "block_size": 512, 00:12:43.880 "num_blocks": 65536, 00:12:43.880 "uuid": "3f46efbe-3394-4c8e-8ae4-edb70c40e499", 00:12:43.880 "assigned_rate_limits": { 00:12:43.880 "rw_ios_per_sec": 0, 00:12:43.880 "rw_mbytes_per_sec": 0, 00:12:43.880 "r_mbytes_per_sec": 0, 00:12:43.880 "w_mbytes_per_sec": 0 00:12:43.880 }, 00:12:43.880 "claimed": true, 00:12:43.880 "claim_type": "exclusive_write", 00:12:43.880 "zoned": false, 00:12:43.880 "supported_io_types": { 00:12:43.880 "read": true, 00:12:43.880 "write": true, 00:12:43.880 "unmap": true, 00:12:43.880 "flush": true, 00:12:43.880 "reset": true, 00:12:43.880 "nvme_admin": false, 00:12:43.880 "nvme_io": false, 00:12:43.880 "nvme_io_md": false, 00:12:43.880 "write_zeroes": true, 00:12:43.880 "zcopy": true, 00:12:43.880 "get_zone_info": false, 00:12:43.880 "zone_management": false, 00:12:43.880 "zone_append": false, 00:12:43.880 "compare": false, 00:12:43.880 "compare_and_write": false, 00:12:43.880 "abort": true, 00:12:43.880 "seek_hole": false, 00:12:43.880 "seek_data": false, 00:12:43.880 "copy": true, 00:12:43.880 "nvme_iov_md": false 00:12:43.880 }, 00:12:43.880 "memory_domains": [ 00:12:43.880 { 00:12:43.880 "dma_device_id": "system", 00:12:43.880 "dma_device_type": 1 00:12:43.880 }, 00:12:43.880 { 00:12:43.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.880 "dma_device_type": 2 00:12:43.880 } 00:12:43.880 ], 00:12:43.880 "driver_specific": {} 00:12:43.880 } 00:12:43.880 ] 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # 
return 0 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:44.138 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:44.139 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:44.139 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.139 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.139 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:44.139 "name": "Existed_Raid", 00:12:44.139 "uuid": "a272bff5-28a7-4348-bef3-35a27b2368a1", 00:12:44.139 "strip_size_kb": 64, 00:12:44.139 "state": "online", 00:12:44.139 "raid_level": 
"concat", 00:12:44.139 "superblock": false, 00:12:44.139 "num_base_bdevs": 3, 00:12:44.139 "num_base_bdevs_discovered": 3, 00:12:44.139 "num_base_bdevs_operational": 3, 00:12:44.139 "base_bdevs_list": [ 00:12:44.139 { 00:12:44.139 "name": "BaseBdev1", 00:12:44.139 "uuid": "e4ba0a27-0b84-48d1-adf6-37c39593a184", 00:12:44.139 "is_configured": true, 00:12:44.139 "data_offset": 0, 00:12:44.139 "data_size": 65536 00:12:44.139 }, 00:12:44.139 { 00:12:44.139 "name": "BaseBdev2", 00:12:44.139 "uuid": "52baefe4-ac0f-4bd4-9ffa-20be75c42912", 00:12:44.139 "is_configured": true, 00:12:44.139 "data_offset": 0, 00:12:44.139 "data_size": 65536 00:12:44.139 }, 00:12:44.139 { 00:12:44.139 "name": "BaseBdev3", 00:12:44.139 "uuid": "3f46efbe-3394-4c8e-8ae4-edb70c40e499", 00:12:44.139 "is_configured": true, 00:12:44.139 "data_offset": 0, 00:12:44.139 "data_size": 65536 00:12:44.139 } 00:12:44.139 ] 00:12:44.139 }' 00:12:44.139 18:15:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:44.139 18:15:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.706 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:44.706 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:44.706 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:44.706 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:44.706 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:44.706 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:44.706 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 
00:12:44.706 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:44.706 [2024-07-24 18:15:53.289245] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.966 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:44.966 "name": "Existed_Raid", 00:12:44.967 "aliases": [ 00:12:44.967 "a272bff5-28a7-4348-bef3-35a27b2368a1" 00:12:44.967 ], 00:12:44.967 "product_name": "Raid Volume", 00:12:44.967 "block_size": 512, 00:12:44.967 "num_blocks": 196608, 00:12:44.967 "uuid": "a272bff5-28a7-4348-bef3-35a27b2368a1", 00:12:44.967 "assigned_rate_limits": { 00:12:44.967 "rw_ios_per_sec": 0, 00:12:44.967 "rw_mbytes_per_sec": 0, 00:12:44.967 "r_mbytes_per_sec": 0, 00:12:44.967 "w_mbytes_per_sec": 0 00:12:44.967 }, 00:12:44.967 "claimed": false, 00:12:44.967 "zoned": false, 00:12:44.967 "supported_io_types": { 00:12:44.967 "read": true, 00:12:44.967 "write": true, 00:12:44.967 "unmap": true, 00:12:44.967 "flush": true, 00:12:44.967 "reset": true, 00:12:44.967 "nvme_admin": false, 00:12:44.967 "nvme_io": false, 00:12:44.967 "nvme_io_md": false, 00:12:44.967 "write_zeroes": true, 00:12:44.967 "zcopy": false, 00:12:44.967 "get_zone_info": false, 00:12:44.967 "zone_management": false, 00:12:44.967 "zone_append": false, 00:12:44.967 "compare": false, 00:12:44.967 "compare_and_write": false, 00:12:44.967 "abort": false, 00:12:44.967 "seek_hole": false, 00:12:44.967 "seek_data": false, 00:12:44.967 "copy": false, 00:12:44.967 "nvme_iov_md": false 00:12:44.967 }, 00:12:44.967 "memory_domains": [ 00:12:44.967 { 00:12:44.967 "dma_device_id": "system", 00:12:44.967 "dma_device_type": 1 00:12:44.967 }, 00:12:44.967 { 00:12:44.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.967 "dma_device_type": 2 00:12:44.967 }, 00:12:44.967 { 00:12:44.967 "dma_device_id": "system", 00:12:44.967 "dma_device_type": 1 00:12:44.967 }, 00:12:44.967 { 00:12:44.967 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:44.967 "dma_device_type": 2 00:12:44.967 }, 00:12:44.967 { 00:12:44.967 "dma_device_id": "system", 00:12:44.967 "dma_device_type": 1 00:12:44.967 }, 00:12:44.967 { 00:12:44.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.967 "dma_device_type": 2 00:12:44.967 } 00:12:44.967 ], 00:12:44.967 "driver_specific": { 00:12:44.967 "raid": { 00:12:44.967 "uuid": "a272bff5-28a7-4348-bef3-35a27b2368a1", 00:12:44.967 "strip_size_kb": 64, 00:12:44.967 "state": "online", 00:12:44.967 "raid_level": "concat", 00:12:44.967 "superblock": false, 00:12:44.967 "num_base_bdevs": 3, 00:12:44.967 "num_base_bdevs_discovered": 3, 00:12:44.967 "num_base_bdevs_operational": 3, 00:12:44.967 "base_bdevs_list": [ 00:12:44.967 { 00:12:44.967 "name": "BaseBdev1", 00:12:44.967 "uuid": "e4ba0a27-0b84-48d1-adf6-37c39593a184", 00:12:44.967 "is_configured": true, 00:12:44.967 "data_offset": 0, 00:12:44.967 "data_size": 65536 00:12:44.967 }, 00:12:44.967 { 00:12:44.967 "name": "BaseBdev2", 00:12:44.967 "uuid": "52baefe4-ac0f-4bd4-9ffa-20be75c42912", 00:12:44.967 "is_configured": true, 00:12:44.967 "data_offset": 0, 00:12:44.967 "data_size": 65536 00:12:44.967 }, 00:12:44.967 { 00:12:44.967 "name": "BaseBdev3", 00:12:44.967 "uuid": "3f46efbe-3394-4c8e-8ae4-edb70c40e499", 00:12:44.967 "is_configured": true, 00:12:44.967 "data_offset": 0, 00:12:44.967 "data_size": 65536 00:12:44.967 } 00:12:44.967 ] 00:12:44.967 } 00:12:44.967 } 00:12:44.967 }' 00:12:44.967 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.967 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:44.967 BaseBdev2 00:12:44.967 BaseBdev3' 00:12:44.967 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:44.967 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:44.967 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:44.967 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:44.967 "name": "BaseBdev1", 00:12:44.967 "aliases": [ 00:12:44.967 "e4ba0a27-0b84-48d1-adf6-37c39593a184" 00:12:44.967 ], 00:12:44.967 "product_name": "Malloc disk", 00:12:44.967 "block_size": 512, 00:12:44.967 "num_blocks": 65536, 00:12:44.967 "uuid": "e4ba0a27-0b84-48d1-adf6-37c39593a184", 00:12:44.967 "assigned_rate_limits": { 00:12:44.967 "rw_ios_per_sec": 0, 00:12:44.967 "rw_mbytes_per_sec": 0, 00:12:44.967 "r_mbytes_per_sec": 0, 00:12:44.967 "w_mbytes_per_sec": 0 00:12:44.967 }, 00:12:44.967 "claimed": true, 00:12:44.967 "claim_type": "exclusive_write", 00:12:44.967 "zoned": false, 00:12:44.967 "supported_io_types": { 00:12:44.967 "read": true, 00:12:44.967 "write": true, 00:12:44.967 "unmap": true, 00:12:44.967 "flush": true, 00:12:44.967 "reset": true, 00:12:44.967 "nvme_admin": false, 00:12:44.967 "nvme_io": false, 00:12:44.967 "nvme_io_md": false, 00:12:44.967 "write_zeroes": true, 00:12:44.967 "zcopy": true, 00:12:44.967 "get_zone_info": false, 00:12:44.967 "zone_management": false, 00:12:44.967 "zone_append": false, 00:12:44.967 "compare": false, 00:12:44.967 "compare_and_write": false, 00:12:44.967 "abort": true, 00:12:44.967 "seek_hole": false, 00:12:44.967 "seek_data": false, 00:12:44.967 "copy": true, 00:12:44.967 "nvme_iov_md": false 00:12:44.967 }, 00:12:44.967 "memory_domains": [ 00:12:44.967 { 00:12:44.967 "dma_device_id": "system", 00:12:44.967 "dma_device_type": 1 00:12:44.967 }, 00:12:44.967 { 00:12:44.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.967 "dma_device_type": 2 00:12:44.967 } 00:12:44.967 ], 00:12:44.967 "driver_specific": {} 00:12:44.967 }' 00:12:44.967 18:15:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:45.226 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:45.226 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:45.226 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:45.226 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:45.226 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:45.226 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:45.226 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:45.226 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:45.226 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:45.484 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:45.484 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:45.484 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:45.484 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:45.484 18:15:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:45.484 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:45.484 "name": "BaseBdev2", 00:12:45.484 "aliases": [ 00:12:45.485 "52baefe4-ac0f-4bd4-9ffa-20be75c42912" 00:12:45.485 ], 00:12:45.485 "product_name": "Malloc disk", 00:12:45.485 "block_size": 512, 00:12:45.485 "num_blocks": 65536, 00:12:45.485 "uuid": "52baefe4-ac0f-4bd4-9ffa-20be75c42912", 
00:12:45.485 "assigned_rate_limits": { 00:12:45.485 "rw_ios_per_sec": 0, 00:12:45.485 "rw_mbytes_per_sec": 0, 00:12:45.485 "r_mbytes_per_sec": 0, 00:12:45.485 "w_mbytes_per_sec": 0 00:12:45.485 }, 00:12:45.485 "claimed": true, 00:12:45.485 "claim_type": "exclusive_write", 00:12:45.485 "zoned": false, 00:12:45.485 "supported_io_types": { 00:12:45.485 "read": true, 00:12:45.485 "write": true, 00:12:45.485 "unmap": true, 00:12:45.485 "flush": true, 00:12:45.485 "reset": true, 00:12:45.485 "nvme_admin": false, 00:12:45.485 "nvme_io": false, 00:12:45.485 "nvme_io_md": false, 00:12:45.485 "write_zeroes": true, 00:12:45.485 "zcopy": true, 00:12:45.485 "get_zone_info": false, 00:12:45.485 "zone_management": false, 00:12:45.485 "zone_append": false, 00:12:45.485 "compare": false, 00:12:45.485 "compare_and_write": false, 00:12:45.485 "abort": true, 00:12:45.485 "seek_hole": false, 00:12:45.485 "seek_data": false, 00:12:45.485 "copy": true, 00:12:45.485 "nvme_iov_md": false 00:12:45.485 }, 00:12:45.485 "memory_domains": [ 00:12:45.485 { 00:12:45.485 "dma_device_id": "system", 00:12:45.485 "dma_device_type": 1 00:12:45.485 }, 00:12:45.485 { 00:12:45.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.485 "dma_device_type": 2 00:12:45.485 } 00:12:45.485 ], 00:12:45.485 "driver_specific": {} 00:12:45.485 }' 00:12:45.485 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:45.485 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:45.743 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:46.001 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:46.001 "name": "BaseBdev3", 00:12:46.001 "aliases": [ 00:12:46.002 "3f46efbe-3394-4c8e-8ae4-edb70c40e499" 00:12:46.002 ], 00:12:46.002 "product_name": "Malloc disk", 00:12:46.002 "block_size": 512, 00:12:46.002 "num_blocks": 65536, 00:12:46.002 "uuid": "3f46efbe-3394-4c8e-8ae4-edb70c40e499", 00:12:46.002 "assigned_rate_limits": { 00:12:46.002 "rw_ios_per_sec": 0, 00:12:46.002 "rw_mbytes_per_sec": 0, 00:12:46.002 "r_mbytes_per_sec": 0, 00:12:46.002 "w_mbytes_per_sec": 0 00:12:46.002 }, 00:12:46.002 "claimed": true, 00:12:46.002 "claim_type": "exclusive_write", 00:12:46.002 "zoned": false, 00:12:46.002 "supported_io_types": { 00:12:46.002 "read": true, 00:12:46.002 "write": true, 00:12:46.002 "unmap": true, 00:12:46.002 "flush": true, 00:12:46.002 "reset": true, 00:12:46.002 "nvme_admin": false, 00:12:46.002 "nvme_io": false, 00:12:46.002 "nvme_io_md": false, 00:12:46.002 "write_zeroes": true, 
00:12:46.002 "zcopy": true, 00:12:46.002 "get_zone_info": false, 00:12:46.002 "zone_management": false, 00:12:46.002 "zone_append": false, 00:12:46.002 "compare": false, 00:12:46.002 "compare_and_write": false, 00:12:46.002 "abort": true, 00:12:46.002 "seek_hole": false, 00:12:46.002 "seek_data": false, 00:12:46.002 "copy": true, 00:12:46.002 "nvme_iov_md": false 00:12:46.002 }, 00:12:46.002 "memory_domains": [ 00:12:46.002 { 00:12:46.002 "dma_device_id": "system", 00:12:46.002 "dma_device_type": 1 00:12:46.002 }, 00:12:46.002 { 00:12:46.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.002 "dma_device_type": 2 00:12:46.002 } 00:12:46.002 ], 00:12:46.002 "driver_specific": {} 00:12:46.002 }' 00:12:46.002 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:46.002 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:46.002 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:46.002 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:46.260 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:46.260 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:46.260 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:46.260 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:46.260 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:46.260 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:46.260 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:46.260 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:46.260 18:15:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:46.519 [2024-07-24 18:15:54.965419] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.519 [2024-07-24 18:15:54.965440] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.519 [2024-07-24 18:15:54.965470] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.519 18:15:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.519 18:15:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.777 18:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.777 "name": "Existed_Raid", 00:12:46.777 "uuid": "a272bff5-28a7-4348-bef3-35a27b2368a1", 00:12:46.777 "strip_size_kb": 64, 00:12:46.777 "state": "offline", 00:12:46.777 "raid_level": "concat", 00:12:46.777 "superblock": false, 00:12:46.777 "num_base_bdevs": 3, 00:12:46.777 "num_base_bdevs_discovered": 2, 00:12:46.777 "num_base_bdevs_operational": 2, 00:12:46.777 "base_bdevs_list": [ 00:12:46.777 { 00:12:46.777 "name": null, 00:12:46.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.778 "is_configured": false, 00:12:46.778 "data_offset": 0, 00:12:46.778 "data_size": 65536 00:12:46.778 }, 00:12:46.778 { 00:12:46.778 "name": "BaseBdev2", 00:12:46.778 "uuid": "52baefe4-ac0f-4bd4-9ffa-20be75c42912", 00:12:46.778 "is_configured": true, 00:12:46.778 "data_offset": 0, 00:12:46.778 "data_size": 65536 00:12:46.778 }, 00:12:46.778 { 00:12:46.778 "name": "BaseBdev3", 00:12:46.778 "uuid": "3f46efbe-3394-4c8e-8ae4-edb70c40e499", 00:12:46.778 "is_configured": true, 00:12:46.778 "data_offset": 0, 00:12:46.778 "data_size": 65536 00:12:46.778 } 00:12:46.778 ] 00:12:46.778 }' 00:12:46.778 18:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.778 18:15:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.344 18:15:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:47.344 18:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:47.344 18:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.344 18:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:47.344 18:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:47.344 18:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.344 18:15:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:47.602 [2024-07-24 18:15:55.996945] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.602 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:47.602 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:47.602 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.602 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:47.602 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:47.602 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:47.602 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:47.860 [2024-07-24 
18:15:56.343020] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:47.860 [2024-07-24 18:15:56.343054] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x125b980 name Existed_Raid, state offline 00:12:47.860 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:47.860 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:47.860 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.860 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:48.119 BaseBdev2 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:48.119 18:15:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:48.119 18:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:48.377 18:15:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:48.635 [ 00:12:48.635 { 00:12:48.635 "name": "BaseBdev2", 00:12:48.635 "aliases": [ 00:12:48.636 "abf3034c-d0af-49ee-aee7-ab0a27d9a1da" 00:12:48.636 ], 00:12:48.636 "product_name": "Malloc disk", 00:12:48.636 "block_size": 512, 00:12:48.636 "num_blocks": 65536, 00:12:48.636 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:48.636 "assigned_rate_limits": { 00:12:48.636 "rw_ios_per_sec": 0, 00:12:48.636 "rw_mbytes_per_sec": 0, 00:12:48.636 "r_mbytes_per_sec": 0, 00:12:48.636 "w_mbytes_per_sec": 0 00:12:48.636 }, 00:12:48.636 "claimed": false, 00:12:48.636 "zoned": false, 00:12:48.636 "supported_io_types": { 00:12:48.636 "read": true, 00:12:48.636 "write": true, 00:12:48.636 "unmap": true, 00:12:48.636 "flush": true, 00:12:48.636 "reset": true, 00:12:48.636 "nvme_admin": false, 00:12:48.636 "nvme_io": false, 00:12:48.636 "nvme_io_md": false, 00:12:48.636 "write_zeroes": true, 00:12:48.636 "zcopy": true, 00:12:48.636 "get_zone_info": false, 00:12:48.636 "zone_management": false, 00:12:48.636 "zone_append": false, 00:12:48.636 "compare": false, 00:12:48.636 "compare_and_write": false, 00:12:48.636 "abort": true, 00:12:48.636 "seek_hole": false, 00:12:48.636 "seek_data": false, 00:12:48.636 "copy": true, 00:12:48.636 "nvme_iov_md": false 00:12:48.636 }, 00:12:48.636 "memory_domains": [ 00:12:48.636 { 00:12:48.636 "dma_device_id": "system", 
00:12:48.636 "dma_device_type": 1 00:12:48.636 }, 00:12:48.636 { 00:12:48.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.636 "dma_device_type": 2 00:12:48.636 } 00:12:48.636 ], 00:12:48.636 "driver_specific": {} 00:12:48.636 } 00:12:48.636 ] 00:12:48.636 18:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:48.636 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:48.636 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:48.636 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:48.636 BaseBdev3 00:12:48.636 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:48.636 18:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:48.894 18:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:48.894 18:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:48.894 18:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:48.894 18:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:48.894 18:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:48.894 18:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:49.153 [ 00:12:49.153 { 00:12:49.153 "name": "BaseBdev3", 00:12:49.153 "aliases": [ 
00:12:49.153 "a8893636-01e8-483f-886e-2a5bc1cd89a4" 00:12:49.153 ], 00:12:49.153 "product_name": "Malloc disk", 00:12:49.153 "block_size": 512, 00:12:49.153 "num_blocks": 65536, 00:12:49.153 "uuid": "a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:49.153 "assigned_rate_limits": { 00:12:49.153 "rw_ios_per_sec": 0, 00:12:49.153 "rw_mbytes_per_sec": 0, 00:12:49.153 "r_mbytes_per_sec": 0, 00:12:49.153 "w_mbytes_per_sec": 0 00:12:49.153 }, 00:12:49.153 "claimed": false, 00:12:49.153 "zoned": false, 00:12:49.153 "supported_io_types": { 00:12:49.153 "read": true, 00:12:49.153 "write": true, 00:12:49.153 "unmap": true, 00:12:49.153 "flush": true, 00:12:49.153 "reset": true, 00:12:49.153 "nvme_admin": false, 00:12:49.153 "nvme_io": false, 00:12:49.153 "nvme_io_md": false, 00:12:49.153 "write_zeroes": true, 00:12:49.153 "zcopy": true, 00:12:49.153 "get_zone_info": false, 00:12:49.153 "zone_management": false, 00:12:49.153 "zone_append": false, 00:12:49.153 "compare": false, 00:12:49.153 "compare_and_write": false, 00:12:49.153 "abort": true, 00:12:49.153 "seek_hole": false, 00:12:49.153 "seek_data": false, 00:12:49.153 "copy": true, 00:12:49.153 "nvme_iov_md": false 00:12:49.153 }, 00:12:49.153 "memory_domains": [ 00:12:49.153 { 00:12:49.153 "dma_device_id": "system", 00:12:49.153 "dma_device_type": 1 00:12:49.153 }, 00:12:49.153 { 00:12:49.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.153 "dma_device_type": 2 00:12:49.153 } 00:12:49.153 ], 00:12:49.153 "driver_specific": {} 00:12:49.153 } 00:12:49.153 ] 00:12:49.153 18:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:49.153 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:49.153 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:49.153 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:49.153 [2024-07-24 18:15:57.736007] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.153 [2024-07-24 18:15:57.736042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.153 [2024-07-24 18:15:57.736056] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.153 [2024-07-24 18:15:57.737010] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.412 
18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.412 "name": "Existed_Raid", 00:12:49.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.412 "strip_size_kb": 64, 00:12:49.412 "state": "configuring", 00:12:49.412 "raid_level": "concat", 00:12:49.412 "superblock": false, 00:12:49.412 "num_base_bdevs": 3, 00:12:49.412 "num_base_bdevs_discovered": 2, 00:12:49.412 "num_base_bdevs_operational": 3, 00:12:49.412 "base_bdevs_list": [ 00:12:49.412 { 00:12:49.412 "name": "BaseBdev1", 00:12:49.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.412 "is_configured": false, 00:12:49.412 "data_offset": 0, 00:12:49.412 "data_size": 0 00:12:49.412 }, 00:12:49.412 { 00:12:49.412 "name": "BaseBdev2", 00:12:49.412 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:49.412 "is_configured": true, 00:12:49.412 "data_offset": 0, 00:12:49.412 "data_size": 65536 00:12:49.412 }, 00:12:49.412 { 00:12:49.412 "name": "BaseBdev3", 00:12:49.412 "uuid": "a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:49.412 "is_configured": true, 00:12:49.412 "data_offset": 0, 00:12:49.412 "data_size": 65536 00:12:49.412 } 00:12:49.412 ] 00:12:49.412 }' 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.412 18:15:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:49.978 [2024-07-24 18:15:58.477908] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.978 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.237 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:50.237 "name": "Existed_Raid", 00:12:50.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.237 "strip_size_kb": 64, 00:12:50.237 "state": "configuring", 00:12:50.237 "raid_level": "concat", 00:12:50.237 "superblock": false, 00:12:50.237 "num_base_bdevs": 3, 00:12:50.237 "num_base_bdevs_discovered": 1, 00:12:50.237 "num_base_bdevs_operational": 3, 00:12:50.237 "base_bdevs_list": [ 00:12:50.237 { 00:12:50.237 "name": "BaseBdev1", 00:12:50.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.237 "is_configured": false, 
00:12:50.237 "data_offset": 0, 00:12:50.237 "data_size": 0 00:12:50.237 }, 00:12:50.237 { 00:12:50.237 "name": null, 00:12:50.237 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:50.237 "is_configured": false, 00:12:50.237 "data_offset": 0, 00:12:50.237 "data_size": 65536 00:12:50.237 }, 00:12:50.237 { 00:12:50.237 "name": "BaseBdev3", 00:12:50.237 "uuid": "a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:50.237 "is_configured": true, 00:12:50.237 "data_offset": 0, 00:12:50.237 "data_size": 65536 00:12:50.237 } 00:12:50.237 ] 00:12:50.237 }' 00:12:50.237 18:15:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:50.237 18:15:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.804 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.804 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.804 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:50.804 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:51.062 [2024-07-24 18:15:59.503538] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.062 BaseBdev1 00:12:51.062 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:51.062 18:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:51.062 18:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:51.062 18:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # 
local i 00:12:51.062 18:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:51.062 18:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:51.062 18:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:51.322 [ 00:12:51.322 { 00:12:51.322 "name": "BaseBdev1", 00:12:51.322 "aliases": [ 00:12:51.322 "f5fce867-7d30-42b2-ad8e-a95be482ecb7" 00:12:51.322 ], 00:12:51.322 "product_name": "Malloc disk", 00:12:51.322 "block_size": 512, 00:12:51.322 "num_blocks": 65536, 00:12:51.322 "uuid": "f5fce867-7d30-42b2-ad8e-a95be482ecb7", 00:12:51.322 "assigned_rate_limits": { 00:12:51.322 "rw_ios_per_sec": 0, 00:12:51.322 "rw_mbytes_per_sec": 0, 00:12:51.322 "r_mbytes_per_sec": 0, 00:12:51.322 "w_mbytes_per_sec": 0 00:12:51.322 }, 00:12:51.322 "claimed": true, 00:12:51.322 "claim_type": "exclusive_write", 00:12:51.322 "zoned": false, 00:12:51.322 "supported_io_types": { 00:12:51.322 "read": true, 00:12:51.322 "write": true, 00:12:51.322 "unmap": true, 00:12:51.322 "flush": true, 00:12:51.322 "reset": true, 00:12:51.322 "nvme_admin": false, 00:12:51.322 "nvme_io": false, 00:12:51.322 "nvme_io_md": false, 00:12:51.322 "write_zeroes": true, 00:12:51.322 "zcopy": true, 00:12:51.322 "get_zone_info": false, 00:12:51.322 "zone_management": false, 00:12:51.322 "zone_append": false, 00:12:51.322 "compare": false, 00:12:51.322 "compare_and_write": false, 00:12:51.322 "abort": true, 00:12:51.322 "seek_hole": false, 00:12:51.322 "seek_data": false, 00:12:51.322 "copy": true, 00:12:51.322 "nvme_iov_md": false 00:12:51.322 }, 00:12:51.322 
"memory_domains": [ 00:12:51.322 { 00:12:51.322 "dma_device_id": "system", 00:12:51.322 "dma_device_type": 1 00:12:51.322 }, 00:12:51.322 { 00:12:51.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.322 "dma_device_type": 2 00:12:51.322 } 00:12:51.322 ], 00:12:51.322 "driver_specific": {} 00:12:51.322 } 00:12:51.322 ] 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.322 18:15:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.580 18:16:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.580 "name": "Existed_Raid", 00:12:51.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.580 "strip_size_kb": 64, 00:12:51.580 "state": "configuring", 00:12:51.580 "raid_level": "concat", 00:12:51.580 "superblock": false, 00:12:51.580 "num_base_bdevs": 3, 00:12:51.580 "num_base_bdevs_discovered": 2, 00:12:51.580 "num_base_bdevs_operational": 3, 00:12:51.580 "base_bdevs_list": [ 00:12:51.580 { 00:12:51.580 "name": "BaseBdev1", 00:12:51.580 "uuid": "f5fce867-7d30-42b2-ad8e-a95be482ecb7", 00:12:51.580 "is_configured": true, 00:12:51.580 "data_offset": 0, 00:12:51.580 "data_size": 65536 00:12:51.580 }, 00:12:51.580 { 00:12:51.580 "name": null, 00:12:51.581 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:51.581 "is_configured": false, 00:12:51.581 "data_offset": 0, 00:12:51.581 "data_size": 65536 00:12:51.581 }, 00:12:51.581 { 00:12:51.581 "name": "BaseBdev3", 00:12:51.581 "uuid": "a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:51.581 "is_configured": true, 00:12:51.581 "data_offset": 0, 00:12:51.581 "data_size": 65536 00:12:51.581 } 00:12:51.581 ] 00:12:51.581 }' 00:12:51.581 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.581 18:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.147 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.147 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:52.147 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:52.147 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev 
BaseBdev3 00:12:52.404 [2024-07-24 18:16:00.790895] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.404 "name": "Existed_Raid", 00:12:52.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.404 "strip_size_kb": 64, 00:12:52.404 "state": "configuring", 00:12:52.404 "raid_level": "concat", 00:12:52.404 "superblock": false, 00:12:52.404 "num_base_bdevs": 3, 
00:12:52.404 "num_base_bdevs_discovered": 1, 00:12:52.404 "num_base_bdevs_operational": 3, 00:12:52.404 "base_bdevs_list": [ 00:12:52.404 { 00:12:52.404 "name": "BaseBdev1", 00:12:52.404 "uuid": "f5fce867-7d30-42b2-ad8e-a95be482ecb7", 00:12:52.404 "is_configured": true, 00:12:52.404 "data_offset": 0, 00:12:52.404 "data_size": 65536 00:12:52.404 }, 00:12:52.404 { 00:12:52.404 "name": null, 00:12:52.404 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:52.404 "is_configured": false, 00:12:52.404 "data_offset": 0, 00:12:52.404 "data_size": 65536 00:12:52.404 }, 00:12:52.404 { 00:12:52.404 "name": null, 00:12:52.404 "uuid": "a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:52.404 "is_configured": false, 00:12:52.404 "data_offset": 0, 00:12:52.404 "data_size": 65536 00:12:52.404 } 00:12:52.404 ] 00:12:52.404 }' 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.404 18:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.019 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.019 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:53.019 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:53.019 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:53.290 [2024-07-24 18:16:01.733336] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:53.290 18:16:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.290 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.548 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:53.548 "name": "Existed_Raid", 00:12:53.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.548 "strip_size_kb": 64, 00:12:53.548 "state": "configuring", 00:12:53.548 "raid_level": "concat", 00:12:53.548 "superblock": false, 00:12:53.548 "num_base_bdevs": 3, 00:12:53.548 "num_base_bdevs_discovered": 2, 00:12:53.548 "num_base_bdevs_operational": 3, 00:12:53.548 "base_bdevs_list": [ 00:12:53.548 { 00:12:53.548 "name": "BaseBdev1", 00:12:53.548 "uuid": "f5fce867-7d30-42b2-ad8e-a95be482ecb7", 00:12:53.548 "is_configured": true, 00:12:53.548 
"data_offset": 0, 00:12:53.548 "data_size": 65536 00:12:53.548 }, 00:12:53.548 { 00:12:53.548 "name": null, 00:12:53.548 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:53.548 "is_configured": false, 00:12:53.548 "data_offset": 0, 00:12:53.548 "data_size": 65536 00:12:53.548 }, 00:12:53.548 { 00:12:53.548 "name": "BaseBdev3", 00:12:53.548 "uuid": "a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:53.548 "is_configured": true, 00:12:53.548 "data_offset": 0, 00:12:53.548 "data_size": 65536 00:12:53.548 } 00:12:53.548 ] 00:12:53.548 }' 00:12:53.548 18:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:53.548 18:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.807 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.807 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:54.065 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:54.065 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:54.326 [2024-07-24 18:16:02.691820] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.326 "name": "Existed_Raid", 00:12:54.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.326 "strip_size_kb": 64, 00:12:54.326 "state": "configuring", 00:12:54.326 "raid_level": "concat", 00:12:54.326 "superblock": false, 00:12:54.326 "num_base_bdevs": 3, 00:12:54.326 "num_base_bdevs_discovered": 1, 00:12:54.326 "num_base_bdevs_operational": 3, 00:12:54.326 "base_bdevs_list": [ 00:12:54.326 { 00:12:54.326 "name": null, 00:12:54.326 "uuid": "f5fce867-7d30-42b2-ad8e-a95be482ecb7", 00:12:54.326 "is_configured": false, 00:12:54.326 "data_offset": 0, 00:12:54.326 "data_size": 65536 00:12:54.326 }, 00:12:54.326 { 00:12:54.326 "name": null, 00:12:54.326 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:54.326 "is_configured": false, 00:12:54.326 "data_offset": 0, 00:12:54.326 "data_size": 65536 00:12:54.326 }, 00:12:54.326 { 00:12:54.326 "name": "BaseBdev3", 00:12:54.326 
"uuid": "a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:54.326 "is_configured": true, 00:12:54.326 "data_offset": 0, 00:12:54.326 "data_size": 65536 00:12:54.326 } 00:12:54.326 ] 00:12:54.326 }' 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.326 18:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.894 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.894 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:55.152 [2024-07-24 18:16:03.692126] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.152 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.408 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:55.408 "name": "Existed_Raid", 00:12:55.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.408 "strip_size_kb": 64, 00:12:55.408 "state": "configuring", 00:12:55.408 "raid_level": "concat", 00:12:55.408 "superblock": false, 00:12:55.408 "num_base_bdevs": 3, 00:12:55.408 "num_base_bdevs_discovered": 2, 00:12:55.408 "num_base_bdevs_operational": 3, 00:12:55.408 "base_bdevs_list": [ 00:12:55.408 { 00:12:55.408 "name": null, 00:12:55.408 "uuid": "f5fce867-7d30-42b2-ad8e-a95be482ecb7", 00:12:55.408 "is_configured": false, 00:12:55.408 "data_offset": 0, 00:12:55.408 "data_size": 65536 00:12:55.408 }, 00:12:55.408 { 00:12:55.408 "name": "BaseBdev2", 00:12:55.408 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:55.408 "is_configured": true, 00:12:55.408 "data_offset": 0, 00:12:55.408 "data_size": 65536 00:12:55.408 }, 00:12:55.408 { 00:12:55.408 "name": "BaseBdev3", 00:12:55.408 "uuid": "a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:55.408 "is_configured": true, 00:12:55.408 "data_offset": 0, 00:12:55.408 "data_size": 65536 00:12:55.408 } 00:12:55.408 ] 00:12:55.408 }' 00:12:55.408 18:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:12:55.408 18:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.974 18:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:55.974 18:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.974 18:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:55.974 18:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.974 18:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:56.233 18:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f5fce867-7d30-42b2-ad8e-a95be482ecb7 00:12:56.492 [2024-07-24 18:16:04.853869] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:56.492 [2024-07-24 18:16:04.853897] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x125bd70 00:12:56.492 [2024-07-24 18:16:04.853902] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:56.492 [2024-07-24 18:16:04.854031] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x125b190 00:12:56.492 [2024-07-24 18:16:04.854116] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x125bd70 00:12:56.492 [2024-07-24 18:16:04.854123] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x125bd70 00:12:56.492 [2024-07-24 18:16:04.854239] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:56.492 NewBaseBdev 00:12:56.492 18:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:56.492 18:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:56.492 18:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.492 18:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.492 18:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.492 18:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.492 18:16:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.492 18:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:56.751 [ 00:12:56.751 { 00:12:56.751 "name": "NewBaseBdev", 00:12:56.751 "aliases": [ 00:12:56.751 "f5fce867-7d30-42b2-ad8e-a95be482ecb7" 00:12:56.751 ], 00:12:56.751 "product_name": "Malloc disk", 00:12:56.751 "block_size": 512, 00:12:56.751 "num_blocks": 65536, 00:12:56.751 "uuid": "f5fce867-7d30-42b2-ad8e-a95be482ecb7", 00:12:56.751 "assigned_rate_limits": { 00:12:56.751 "rw_ios_per_sec": 0, 00:12:56.751 "rw_mbytes_per_sec": 0, 00:12:56.751 "r_mbytes_per_sec": 0, 00:12:56.751 "w_mbytes_per_sec": 0 00:12:56.751 }, 00:12:56.751 "claimed": true, 00:12:56.751 "claim_type": "exclusive_write", 00:12:56.751 "zoned": false, 00:12:56.751 "supported_io_types": { 00:12:56.751 "read": true, 00:12:56.751 "write": true, 00:12:56.751 "unmap": true, 00:12:56.751 "flush": true, 00:12:56.751 "reset": true, 00:12:56.751 "nvme_admin": false, 
00:12:56.751 "nvme_io": false, 00:12:56.751 "nvme_io_md": false, 00:12:56.751 "write_zeroes": true, 00:12:56.751 "zcopy": true, 00:12:56.751 "get_zone_info": false, 00:12:56.751 "zone_management": false, 00:12:56.751 "zone_append": false, 00:12:56.751 "compare": false, 00:12:56.751 "compare_and_write": false, 00:12:56.751 "abort": true, 00:12:56.751 "seek_hole": false, 00:12:56.751 "seek_data": false, 00:12:56.751 "copy": true, 00:12:56.751 "nvme_iov_md": false 00:12:56.751 }, 00:12:56.751 "memory_domains": [ 00:12:56.751 { 00:12:56.751 "dma_device_id": "system", 00:12:56.751 "dma_device_type": 1 00:12:56.751 }, 00:12:56.751 { 00:12:56.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.751 "dma_device_type": 2 00:12:56.751 } 00:12:56.751 ], 00:12:56.751 "driver_specific": {} 00:12:56.751 } 00:12:56.751 ] 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.751 18:16:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.751 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.009 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:57.009 "name": "Existed_Raid", 00:12:57.009 "uuid": "8ba899f9-cb7c-4b1a-8ea3-93cdff96662b", 00:12:57.009 "strip_size_kb": 64, 00:12:57.009 "state": "online", 00:12:57.009 "raid_level": "concat", 00:12:57.009 "superblock": false, 00:12:57.009 "num_base_bdevs": 3, 00:12:57.009 "num_base_bdevs_discovered": 3, 00:12:57.009 "num_base_bdevs_operational": 3, 00:12:57.009 "base_bdevs_list": [ 00:12:57.009 { 00:12:57.009 "name": "NewBaseBdev", 00:12:57.009 "uuid": "f5fce867-7d30-42b2-ad8e-a95be482ecb7", 00:12:57.009 "is_configured": true, 00:12:57.009 "data_offset": 0, 00:12:57.009 "data_size": 65536 00:12:57.009 }, 00:12:57.009 { 00:12:57.009 "name": "BaseBdev2", 00:12:57.009 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:57.009 "is_configured": true, 00:12:57.009 "data_offset": 0, 00:12:57.009 "data_size": 65536 00:12:57.009 }, 00:12:57.009 { 00:12:57.009 "name": "BaseBdev3", 00:12:57.009 "uuid": "a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:57.009 "is_configured": true, 00:12:57.009 "data_offset": 0, 00:12:57.009 "data_size": 65536 00:12:57.009 } 00:12:57.009 ] 00:12:57.009 }' 00:12:57.009 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:57.009 18:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.268 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.268 18:16:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:57.268 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:57.268 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:57.268 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:57.268 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:57.268 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:57.268 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:57.527 [2024-07-24 18:16:05.952910] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.527 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:57.527 "name": "Existed_Raid", 00:12:57.527 "aliases": [ 00:12:57.527 "8ba899f9-cb7c-4b1a-8ea3-93cdff96662b" 00:12:57.527 ], 00:12:57.527 "product_name": "Raid Volume", 00:12:57.527 "block_size": 512, 00:12:57.527 "num_blocks": 196608, 00:12:57.527 "uuid": "8ba899f9-cb7c-4b1a-8ea3-93cdff96662b", 00:12:57.527 "assigned_rate_limits": { 00:12:57.527 "rw_ios_per_sec": 0, 00:12:57.527 "rw_mbytes_per_sec": 0, 00:12:57.527 "r_mbytes_per_sec": 0, 00:12:57.527 "w_mbytes_per_sec": 0 00:12:57.527 }, 00:12:57.527 "claimed": false, 00:12:57.527 "zoned": false, 00:12:57.527 "supported_io_types": { 00:12:57.527 "read": true, 00:12:57.527 "write": true, 00:12:57.527 "unmap": true, 00:12:57.527 "flush": true, 00:12:57.527 "reset": true, 00:12:57.527 "nvme_admin": false, 00:12:57.527 "nvme_io": false, 00:12:57.527 "nvme_io_md": false, 00:12:57.527 "write_zeroes": true, 00:12:57.527 "zcopy": false, 00:12:57.527 "get_zone_info": false, 
00:12:57.527 "zone_management": false, 00:12:57.527 "zone_append": false, 00:12:57.527 "compare": false, 00:12:57.527 "compare_and_write": false, 00:12:57.527 "abort": false, 00:12:57.527 "seek_hole": false, 00:12:57.527 "seek_data": false, 00:12:57.527 "copy": false, 00:12:57.527 "nvme_iov_md": false 00:12:57.527 }, 00:12:57.527 "memory_domains": [ 00:12:57.527 { 00:12:57.527 "dma_device_id": "system", 00:12:57.527 "dma_device_type": 1 00:12:57.527 }, 00:12:57.527 { 00:12:57.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.527 "dma_device_type": 2 00:12:57.527 }, 00:12:57.527 { 00:12:57.527 "dma_device_id": "system", 00:12:57.527 "dma_device_type": 1 00:12:57.527 }, 00:12:57.527 { 00:12:57.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.527 "dma_device_type": 2 00:12:57.527 }, 00:12:57.527 { 00:12:57.527 "dma_device_id": "system", 00:12:57.527 "dma_device_type": 1 00:12:57.527 }, 00:12:57.527 { 00:12:57.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.527 "dma_device_type": 2 00:12:57.527 } 00:12:57.527 ], 00:12:57.527 "driver_specific": { 00:12:57.527 "raid": { 00:12:57.527 "uuid": "8ba899f9-cb7c-4b1a-8ea3-93cdff96662b", 00:12:57.527 "strip_size_kb": 64, 00:12:57.527 "state": "online", 00:12:57.527 "raid_level": "concat", 00:12:57.527 "superblock": false, 00:12:57.527 "num_base_bdevs": 3, 00:12:57.527 "num_base_bdevs_discovered": 3, 00:12:57.527 "num_base_bdevs_operational": 3, 00:12:57.527 "base_bdevs_list": [ 00:12:57.527 { 00:12:57.527 "name": "NewBaseBdev", 00:12:57.527 "uuid": "f5fce867-7d30-42b2-ad8e-a95be482ecb7", 00:12:57.527 "is_configured": true, 00:12:57.527 "data_offset": 0, 00:12:57.527 "data_size": 65536 00:12:57.527 }, 00:12:57.527 { 00:12:57.527 "name": "BaseBdev2", 00:12:57.527 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:57.527 "is_configured": true, 00:12:57.527 "data_offset": 0, 00:12:57.527 "data_size": 65536 00:12:57.527 }, 00:12:57.527 { 00:12:57.527 "name": "BaseBdev3", 00:12:57.527 "uuid": 
"a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:57.527 "is_configured": true, 00:12:57.527 "data_offset": 0, 00:12:57.527 "data_size": 65536 00:12:57.527 } 00:12:57.527 ] 00:12:57.527 } 00:12:57.527 } 00:12:57.527 }' 00:12:57.527 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.527 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:57.527 BaseBdev2 00:12:57.527 BaseBdev3' 00:12:57.527 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:57.527 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:57.527 18:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:57.786 "name": "NewBaseBdev", 00:12:57.786 "aliases": [ 00:12:57.786 "f5fce867-7d30-42b2-ad8e-a95be482ecb7" 00:12:57.786 ], 00:12:57.786 "product_name": "Malloc disk", 00:12:57.786 "block_size": 512, 00:12:57.786 "num_blocks": 65536, 00:12:57.786 "uuid": "f5fce867-7d30-42b2-ad8e-a95be482ecb7", 00:12:57.786 "assigned_rate_limits": { 00:12:57.786 "rw_ios_per_sec": 0, 00:12:57.786 "rw_mbytes_per_sec": 0, 00:12:57.786 "r_mbytes_per_sec": 0, 00:12:57.786 "w_mbytes_per_sec": 0 00:12:57.786 }, 00:12:57.786 "claimed": true, 00:12:57.786 "claim_type": "exclusive_write", 00:12:57.786 "zoned": false, 00:12:57.786 "supported_io_types": { 00:12:57.786 "read": true, 00:12:57.786 "write": true, 00:12:57.786 "unmap": true, 00:12:57.786 "flush": true, 00:12:57.786 "reset": true, 00:12:57.786 "nvme_admin": false, 00:12:57.786 "nvme_io": false, 00:12:57.786 "nvme_io_md": false, 00:12:57.786 "write_zeroes": true, 
00:12:57.786 "zcopy": true, 00:12:57.786 "get_zone_info": false, 00:12:57.786 "zone_management": false, 00:12:57.786 "zone_append": false, 00:12:57.786 "compare": false, 00:12:57.786 "compare_and_write": false, 00:12:57.786 "abort": true, 00:12:57.786 "seek_hole": false, 00:12:57.786 "seek_data": false, 00:12:57.786 "copy": true, 00:12:57.786 "nvme_iov_md": false 00:12:57.786 }, 00:12:57.786 "memory_domains": [ 00:12:57.786 { 00:12:57.786 "dma_device_id": "system", 00:12:57.786 "dma_device_type": 1 00:12:57.786 }, 00:12:57.786 { 00:12:57.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.786 "dma_device_type": 2 00:12:57.786 } 00:12:57.786 ], 00:12:57.786 "driver_specific": {} 00:12:57.786 }' 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:57.786 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.044 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.044 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.045 18:16:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.045 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:58.045 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.045 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.045 "name": "BaseBdev2", 00:12:58.045 "aliases": [ 00:12:58.045 "abf3034c-d0af-49ee-aee7-ab0a27d9a1da" 00:12:58.045 ], 00:12:58.045 "product_name": "Malloc disk", 00:12:58.045 "block_size": 512, 00:12:58.045 "num_blocks": 65536, 00:12:58.045 "uuid": "abf3034c-d0af-49ee-aee7-ab0a27d9a1da", 00:12:58.045 "assigned_rate_limits": { 00:12:58.045 "rw_ios_per_sec": 0, 00:12:58.045 "rw_mbytes_per_sec": 0, 00:12:58.045 "r_mbytes_per_sec": 0, 00:12:58.045 "w_mbytes_per_sec": 0 00:12:58.045 }, 00:12:58.045 "claimed": true, 00:12:58.045 "claim_type": "exclusive_write", 00:12:58.045 "zoned": false, 00:12:58.045 "supported_io_types": { 00:12:58.045 "read": true, 00:12:58.045 "write": true, 00:12:58.045 "unmap": true, 00:12:58.045 "flush": true, 00:12:58.045 "reset": true, 00:12:58.045 "nvme_admin": false, 00:12:58.045 "nvme_io": false, 00:12:58.045 "nvme_io_md": false, 00:12:58.045 "write_zeroes": true, 00:12:58.045 "zcopy": true, 00:12:58.045 "get_zone_info": false, 00:12:58.045 "zone_management": false, 00:12:58.045 "zone_append": false, 00:12:58.045 "compare": false, 00:12:58.045 "compare_and_write": false, 00:12:58.045 "abort": true, 00:12:58.045 "seek_hole": false, 00:12:58.045 "seek_data": false, 00:12:58.045 "copy": true, 00:12:58.045 "nvme_iov_md": false 00:12:58.045 }, 00:12:58.045 "memory_domains": [ 00:12:58.045 { 00:12:58.045 "dma_device_id": "system", 00:12:58.045 "dma_device_type": 1 00:12:58.045 }, 00:12:58.045 { 00:12:58.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.045 "dma_device_type": 2 
00:12:58.045 } 00:12:58.045 ], 00:12:58.045 "driver_specific": {} 00:12:58.045 }' 00:12:58.045 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.045 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.045 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.045 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:58.303 18:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.561 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.561 "name": "BaseBdev3", 00:12:58.561 "aliases": [ 00:12:58.561 "a8893636-01e8-483f-886e-2a5bc1cd89a4" 00:12:58.561 ], 00:12:58.562 "product_name": 
"Malloc disk", 00:12:58.562 "block_size": 512, 00:12:58.562 "num_blocks": 65536, 00:12:58.562 "uuid": "a8893636-01e8-483f-886e-2a5bc1cd89a4", 00:12:58.562 "assigned_rate_limits": { 00:12:58.562 "rw_ios_per_sec": 0, 00:12:58.562 "rw_mbytes_per_sec": 0, 00:12:58.562 "r_mbytes_per_sec": 0, 00:12:58.562 "w_mbytes_per_sec": 0 00:12:58.562 }, 00:12:58.562 "claimed": true, 00:12:58.562 "claim_type": "exclusive_write", 00:12:58.562 "zoned": false, 00:12:58.562 "supported_io_types": { 00:12:58.562 "read": true, 00:12:58.562 "write": true, 00:12:58.562 "unmap": true, 00:12:58.562 "flush": true, 00:12:58.562 "reset": true, 00:12:58.562 "nvme_admin": false, 00:12:58.562 "nvme_io": false, 00:12:58.562 "nvme_io_md": false, 00:12:58.562 "write_zeroes": true, 00:12:58.562 "zcopy": true, 00:12:58.562 "get_zone_info": false, 00:12:58.562 "zone_management": false, 00:12:58.562 "zone_append": false, 00:12:58.562 "compare": false, 00:12:58.562 "compare_and_write": false, 00:12:58.562 "abort": true, 00:12:58.562 "seek_hole": false, 00:12:58.562 "seek_data": false, 00:12:58.562 "copy": true, 00:12:58.562 "nvme_iov_md": false 00:12:58.562 }, 00:12:58.562 "memory_domains": [ 00:12:58.562 { 00:12:58.562 "dma_device_id": "system", 00:12:58.562 "dma_device_type": 1 00:12:58.562 }, 00:12:58.562 { 00:12:58.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.562 "dma_device_type": 2 00:12:58.562 } 00:12:58.562 ], 00:12:58.562 "driver_specific": {} 00:12:58.562 }' 00:12:58.562 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.562 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.562 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.562 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.562 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.562 18:16:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.562 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.820 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.820 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.820 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.820 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.820 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.820 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:59.079 [2024-07-24 18:16:07.424658] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.079 [2024-07-24 18:16:07.424679] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.079 [2024-07-24 18:16:07.424716] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.079 [2024-07-24 18:16:07.424753] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.079 [2024-07-24 18:16:07.424760] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x125bd70 name Existed_Raid, state offline 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2184773 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2184773 ']' 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2184773 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # uname 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2184773 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2184773' 00:12:59.079 killing process with pid 2184773 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2184773 00:12:59.079 [2024-07-24 18:16:07.491236] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.079 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2184773 00:12:59.079 [2024-07-24 18:16:07.514287] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.338 18:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:59.338 00:12:59.338 real 0m21.088s 00:12:59.338 user 0m38.453s 00:12:59.338 sys 0m4.117s 00:12:59.338 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.338 18:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.338 ************************************ 00:12:59.338 END TEST raid_state_function_test 00:12:59.338 ************************************ 00:12:59.338 18:16:07 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:59.338 18:16:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:59.338 18:16:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
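The test loop above fetches each base bdev with `rpc.py bdev_get_bdevs -b BaseBdevN`, then uses `jq` to assert `.block_size == 512` and that `.md_size`, `.md_interleave`, and `.dif_type` are all `null`. The same invariants can be sketched in Python against a descriptor trimmed from the JSON printed in this log (this is an illustrative re-implementation, not part of `bdev_raid.sh`; `check_base_bdev` is a hypothetical helper name):

```python
import json

# Descriptor trimmed from the bdev_get_bdevs output logged above.
base_bdev_info = json.loads("""
{
  "name": "BaseBdev2",
  "product_name": "Malloc disk",
  "block_size": 512,
  "num_blocks": 65536,
  "claimed": true,
  "claim_type": "exclusive_write"
}
""")

def check_base_bdev(info, expected_block_size=512):
    """Mirror the jq assertions from the log: block_size matches, and the
    optional metadata fields are absent (jq prints null for missing keys)."""
    if info["block_size"] != expected_block_size:
        return False
    for key in ("md_size", "md_interleave", "dif_type"):
        if info.get(key) is not None:
            return False
    return True

print(check_base_bdev(base_bdev_info))  # → True
```

A Malloc disk created with `bdev_malloc_create 32 512` carries no protection metadata, which is why all three metadata fields compare as `null` in the log.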
00:12:59.338 18:16:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.338 ************************************ 00:12:59.338 START TEST raid_state_function_test_sb 00:12:59.338 ************************************ 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2189116 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2189116' 00:12:59.339 Process raid pid: 2189116 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2189116 /var/tmp/spdk-raid.sock 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 
2189116 ']' 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:59.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.339 18:16:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.339 [2024-07-24 18:16:07.829574] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:12:59.339 [2024-07-24 18:16:07.829617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:01.0 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:01.1 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:01.2 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:01.3 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:01.4 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested 
device 0000:b3:01.5 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:01.6 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:01.7 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:02.0 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:02.1 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:02.2 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:02.3 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:02.4 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:02.5 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:02.6 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b3:02.7 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:01.0 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:01.1 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:01.2 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:01.3 
cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:01.4 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:01.5 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:01.6 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:01.7 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:02.0 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:02.1 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:02.2 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:02.3 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:02.4 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:02.5 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:02.6 cannot be used 00:12:59.339 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:59.339 EAL: Requested device 0000:b5:02.7 cannot be used 00:12:59.339 [2024-07-24 18:16:07.920156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.598 [2024-07-24 18:16:07.995017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.598 [2024-07-24 18:16:08.044436] bdev_raid.c:1442:raid_bdev_get_ctx_size: 
*DEBUG*: raid_bdev_get_ctx_size 00:12:59.598 [2024-07-24 18:16:08.044459] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.164 18:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.164 18:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:00.164 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:00.423 [2024-07-24 18:16:08.771338] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:00.423 [2024-07-24 18:16:08.771366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:00.423 [2024-07-24 18:16:08.771372] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:00.423 [2024-07-24 18:16:08.771379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:00.423 [2024-07-24 18:16:08.771384] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:00.423 [2024-07-24 18:16:08.771408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.423 "name": "Existed_Raid", 00:13:00.423 "uuid": "c7f1f759-ab33-4864-856a-770b19df53ae", 00:13:00.423 "strip_size_kb": 64, 00:13:00.423 "state": "configuring", 00:13:00.423 "raid_level": "concat", 00:13:00.423 "superblock": true, 00:13:00.423 "num_base_bdevs": 3, 00:13:00.423 "num_base_bdevs_discovered": 0, 00:13:00.423 "num_base_bdevs_operational": 3, 00:13:00.423 "base_bdevs_list": [ 00:13:00.423 { 00:13:00.423 "name": "BaseBdev1", 00:13:00.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.423 "is_configured": false, 00:13:00.423 "data_offset": 0, 00:13:00.423 "data_size": 0 00:13:00.423 }, 00:13:00.423 { 00:13:00.423 "name": "BaseBdev2", 00:13:00.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.423 "is_configured": false, 00:13:00.423 "data_offset": 0, 00:13:00.423 "data_size": 0 00:13:00.423 }, 00:13:00.423 { 00:13:00.423 "name": "BaseBdev3", 00:13:00.423 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:00.423 "is_configured": false, 00:13:00.423 "data_offset": 0, 00:13:00.423 "data_size": 0 00:13:00.423 } 00:13:00.423 ] 00:13:00.423 }' 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.423 18:16:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.990 18:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:00.990 [2024-07-24 18:16:09.549241] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:00.990 [2024-07-24 18:16:09.549259] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10871c0 name Existed_Raid, state configuring 00:13:00.990 18:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:01.249 [2024-07-24 18:16:09.721700] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.249 [2024-07-24 18:16:09.721717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.249 [2024-07-24 18:16:09.721722] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.249 [2024-07-24 18:16:09.721729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.249 [2024-07-24 18:16:09.721734] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.249 [2024-07-24 18:16:09.721757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.249 18:16:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:01.507 [2024-07-24 18:16:09.902638] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.507 BaseBdev1 00:13:01.507 18:16:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:01.507 18:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:01.507 18:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:01.507 18:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:01.507 18:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:01.507 18:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:01.507 18:16:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:01.507 18:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:01.765 [ 00:13:01.765 { 00:13:01.765 "name": "BaseBdev1", 00:13:01.765 "aliases": [ 00:13:01.765 "803c63b0-2584-4f59-801b-01db63e1ffc6" 00:13:01.765 ], 00:13:01.765 "product_name": "Malloc disk", 00:13:01.765 "block_size": 512, 00:13:01.765 "num_blocks": 65536, 00:13:01.765 "uuid": "803c63b0-2584-4f59-801b-01db63e1ffc6", 00:13:01.765 "assigned_rate_limits": { 00:13:01.765 "rw_ios_per_sec": 0, 00:13:01.765 "rw_mbytes_per_sec": 0, 00:13:01.765 "r_mbytes_per_sec": 0, 00:13:01.765 "w_mbytes_per_sec": 0 00:13:01.765 }, 00:13:01.765 "claimed": true, 00:13:01.765 
"claim_type": "exclusive_write", 00:13:01.765 "zoned": false, 00:13:01.765 "supported_io_types": { 00:13:01.765 "read": true, 00:13:01.765 "write": true, 00:13:01.765 "unmap": true, 00:13:01.765 "flush": true, 00:13:01.765 "reset": true, 00:13:01.765 "nvme_admin": false, 00:13:01.765 "nvme_io": false, 00:13:01.765 "nvme_io_md": false, 00:13:01.765 "write_zeroes": true, 00:13:01.765 "zcopy": true, 00:13:01.765 "get_zone_info": false, 00:13:01.765 "zone_management": false, 00:13:01.765 "zone_append": false, 00:13:01.765 "compare": false, 00:13:01.765 "compare_and_write": false, 00:13:01.765 "abort": true, 00:13:01.765 "seek_hole": false, 00:13:01.765 "seek_data": false, 00:13:01.765 "copy": true, 00:13:01.765 "nvme_iov_md": false 00:13:01.765 }, 00:13:01.765 "memory_domains": [ 00:13:01.765 { 00:13:01.765 "dma_device_id": "system", 00:13:01.765 "dma_device_type": 1 00:13:01.765 }, 00:13:01.765 { 00:13:01.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.765 "dma_device_type": 2 00:13:01.765 } 00:13:01.765 ], 00:13:01.765 "driver_specific": {} 00:13:01.765 } 00:13:01.765 ] 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:01.765 18:16:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:01.765 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.766 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.024 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.024 "name": "Existed_Raid", 00:13:02.024 "uuid": "0acd164a-af1a-4e61-984c-0e62c9fb8715", 00:13:02.024 "strip_size_kb": 64, 00:13:02.024 "state": "configuring", 00:13:02.024 "raid_level": "concat", 00:13:02.024 "superblock": true, 00:13:02.024 "num_base_bdevs": 3, 00:13:02.024 "num_base_bdevs_discovered": 1, 00:13:02.024 "num_base_bdevs_operational": 3, 00:13:02.024 "base_bdevs_list": [ 00:13:02.024 { 00:13:02.024 "name": "BaseBdev1", 00:13:02.024 "uuid": "803c63b0-2584-4f59-801b-01db63e1ffc6", 00:13:02.024 "is_configured": true, 00:13:02.024 "data_offset": 2048, 00:13:02.024 "data_size": 63488 00:13:02.024 }, 00:13:02.024 { 00:13:02.024 "name": "BaseBdev2", 00:13:02.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.024 "is_configured": false, 00:13:02.024 "data_offset": 0, 00:13:02.024 "data_size": 0 00:13:02.024 }, 00:13:02.024 { 00:13:02.024 "name": "BaseBdev3", 00:13:02.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.024 "is_configured": false, 00:13:02.024 "data_offset": 0, 00:13:02.024 "data_size": 0 00:13:02.024 } 00:13:02.024 ] 00:13:02.024 }' 00:13:02.024 18:16:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.024 18:16:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.591 18:16:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:02.591 [2024-07-24 18:16:11.073648] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:02.591 [2024-07-24 18:16:11.073692] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1086a90 name Existed_Raid, state configuring 00:13:02.591 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:02.849 [2024-07-24 18:16:11.246119] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.849 [2024-07-24 18:16:11.247165] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:02.849 [2024-07-24 18:16:11.247188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:02.849 [2024-07-24 18:16:11.247194] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:02.849 [2024-07-24 18:16:11.247202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:02.849 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:02.849 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:02.849 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:02.849 18:16:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.849 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:02.849 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:02.849 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.849 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:02.850 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.850 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.850 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.850 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.850 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.850 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.850 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.850 "name": "Existed_Raid", 00:13:02.850 "uuid": "14f021d7-037e-4cd3-82ba-f08daca64451", 00:13:02.850 "strip_size_kb": 64, 00:13:02.850 "state": "configuring", 00:13:02.850 "raid_level": "concat", 00:13:02.850 "superblock": true, 00:13:02.850 "num_base_bdevs": 3, 00:13:02.850 "num_base_bdevs_discovered": 1, 00:13:02.850 "num_base_bdevs_operational": 3, 00:13:02.850 "base_bdevs_list": [ 00:13:02.850 { 00:13:02.850 "name": "BaseBdev1", 00:13:02.850 "uuid": "803c63b0-2584-4f59-801b-01db63e1ffc6", 00:13:02.850 
"is_configured": true, 00:13:02.850 "data_offset": 2048, 00:13:02.850 "data_size": 63488 00:13:02.850 }, 00:13:02.850 { 00:13:02.850 "name": "BaseBdev2", 00:13:02.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.850 "is_configured": false, 00:13:02.850 "data_offset": 0, 00:13:02.850 "data_size": 0 00:13:02.850 }, 00:13:02.850 { 00:13:02.850 "name": "BaseBdev3", 00:13:02.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.850 "is_configured": false, 00:13:02.850 "data_offset": 0, 00:13:02.850 "data_size": 0 00:13:02.850 } 00:13:02.850 ] 00:13:02.850 }' 00:13:02.850 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.850 18:16:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.416 18:16:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:03.674 [2024-07-24 18:16:12.026856] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.674 BaseBdev2 00:13:03.674 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:03.674 18:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:03.674 18:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.674 18:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:03.674 18:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.674 18:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:03.674 18:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:03.674 18:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:03.932 [ 00:13:03.932 { 00:13:03.932 "name": "BaseBdev2", 00:13:03.932 "aliases": [ 00:13:03.932 "a362397c-a112-48d2-9944-51ad26890792" 00:13:03.932 ], 00:13:03.932 "product_name": "Malloc disk", 00:13:03.932 "block_size": 512, 00:13:03.932 "num_blocks": 65536, 00:13:03.932 "uuid": "a362397c-a112-48d2-9944-51ad26890792", 00:13:03.932 "assigned_rate_limits": { 00:13:03.932 "rw_ios_per_sec": 0, 00:13:03.932 "rw_mbytes_per_sec": 0, 00:13:03.932 "r_mbytes_per_sec": 0, 00:13:03.932 "w_mbytes_per_sec": 0 00:13:03.932 }, 00:13:03.932 "claimed": true, 00:13:03.932 "claim_type": "exclusive_write", 00:13:03.932 "zoned": false, 00:13:03.932 "supported_io_types": { 00:13:03.932 "read": true, 00:13:03.932 "write": true, 00:13:03.932 "unmap": true, 00:13:03.932 "flush": true, 00:13:03.932 "reset": true, 00:13:03.932 "nvme_admin": false, 00:13:03.932 "nvme_io": false, 00:13:03.932 "nvme_io_md": false, 00:13:03.932 "write_zeroes": true, 00:13:03.932 "zcopy": true, 00:13:03.932 "get_zone_info": false, 00:13:03.932 "zone_management": false, 00:13:03.932 "zone_append": false, 00:13:03.932 "compare": false, 00:13:03.932 "compare_and_write": false, 00:13:03.932 "abort": true, 00:13:03.932 "seek_hole": false, 00:13:03.933 "seek_data": false, 00:13:03.933 "copy": true, 00:13:03.933 "nvme_iov_md": false 00:13:03.933 }, 00:13:03.933 "memory_domains": [ 00:13:03.933 { 00:13:03.933 "dma_device_id": "system", 00:13:03.933 "dma_device_type": 1 00:13:03.933 }, 00:13:03.933 { 00:13:03.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.933 "dma_device_type": 2 00:13:03.933 } 00:13:03.933 ], 00:13:03.933 "driver_specific": {} 00:13:03.933 } 00:13:03.933 ] 
00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.933 "name": "Existed_Raid", 
00:13:03.933 "uuid": "14f021d7-037e-4cd3-82ba-f08daca64451", 00:13:03.933 "strip_size_kb": 64, 00:13:03.933 "state": "configuring", 00:13:03.933 "raid_level": "concat", 00:13:03.933 "superblock": true, 00:13:03.933 "num_base_bdevs": 3, 00:13:03.933 "num_base_bdevs_discovered": 2, 00:13:03.933 "num_base_bdevs_operational": 3, 00:13:03.933 "base_bdevs_list": [ 00:13:03.933 { 00:13:03.933 "name": "BaseBdev1", 00:13:03.933 "uuid": "803c63b0-2584-4f59-801b-01db63e1ffc6", 00:13:03.933 "is_configured": true, 00:13:03.933 "data_offset": 2048, 00:13:03.933 "data_size": 63488 00:13:03.933 }, 00:13:03.933 { 00:13:03.933 "name": "BaseBdev2", 00:13:03.933 "uuid": "a362397c-a112-48d2-9944-51ad26890792", 00:13:03.933 "is_configured": true, 00:13:03.933 "data_offset": 2048, 00:13:03.933 "data_size": 63488 00:13:03.933 }, 00:13:03.933 { 00:13:03.933 "name": "BaseBdev3", 00:13:03.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.933 "is_configured": false, 00:13:03.933 "data_offset": 0, 00:13:03.933 "data_size": 0 00:13:03.933 } 00:13:03.933 ] 00:13:03.933 }' 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.933 18:16:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.499 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:04.757 [2024-07-24 18:16:13.156652] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.757 [2024-07-24 18:16:13.156778] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1087980 00:13:04.757 [2024-07-24 18:16:13.156788] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:04.757 [2024-07-24 18:16:13.156906] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1087650 
00:13:04.757 [2024-07-24 18:16:13.156988] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1087980 00:13:04.757 [2024-07-24 18:16:13.156995] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1087980 00:13:04.757 [2024-07-24 18:16:13.157056] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.757 BaseBdev3 00:13:04.757 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:04.757 18:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:04.757 18:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:04.757 18:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:04.757 18:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:04.757 18:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:04.757 18:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:04.757 18:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:05.015 [ 00:13:05.015 { 00:13:05.015 "name": "BaseBdev3", 00:13:05.015 "aliases": [ 00:13:05.015 "cab00b5f-5712-476e-a520-812d8bf4f3a6" 00:13:05.015 ], 00:13:05.015 "product_name": "Malloc disk", 00:13:05.015 "block_size": 512, 00:13:05.015 "num_blocks": 65536, 00:13:05.015 "uuid": "cab00b5f-5712-476e-a520-812d8bf4f3a6", 00:13:05.015 "assigned_rate_limits": { 00:13:05.015 "rw_ios_per_sec": 0, 00:13:05.015 "rw_mbytes_per_sec": 0, 00:13:05.015 
"r_mbytes_per_sec": 0, 00:13:05.015 "w_mbytes_per_sec": 0 00:13:05.015 }, 00:13:05.015 "claimed": true, 00:13:05.015 "claim_type": "exclusive_write", 00:13:05.015 "zoned": false, 00:13:05.015 "supported_io_types": { 00:13:05.015 "read": true, 00:13:05.015 "write": true, 00:13:05.015 "unmap": true, 00:13:05.015 "flush": true, 00:13:05.015 "reset": true, 00:13:05.015 "nvme_admin": false, 00:13:05.015 "nvme_io": false, 00:13:05.015 "nvme_io_md": false, 00:13:05.015 "write_zeroes": true, 00:13:05.015 "zcopy": true, 00:13:05.015 "get_zone_info": false, 00:13:05.015 "zone_management": false, 00:13:05.015 "zone_append": false, 00:13:05.015 "compare": false, 00:13:05.015 "compare_and_write": false, 00:13:05.015 "abort": true, 00:13:05.015 "seek_hole": false, 00:13:05.015 "seek_data": false, 00:13:05.015 "copy": true, 00:13:05.015 "nvme_iov_md": false 00:13:05.015 }, 00:13:05.015 "memory_domains": [ 00:13:05.015 { 00:13:05.015 "dma_device_id": "system", 00:13:05.015 "dma_device_type": 1 00:13:05.015 }, 00:13:05.015 { 00:13:05.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.015 "dma_device_type": 2 00:13:05.015 } 00:13:05.015 ], 00:13:05.015 "driver_specific": {} 00:13:05.015 } 00:13:05.015 ] 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.015 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.272 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.273 "name": "Existed_Raid", 00:13:05.273 "uuid": "14f021d7-037e-4cd3-82ba-f08daca64451", 00:13:05.273 "strip_size_kb": 64, 00:13:05.273 "state": "online", 00:13:05.273 "raid_level": "concat", 00:13:05.273 "superblock": true, 00:13:05.273 "num_base_bdevs": 3, 00:13:05.273 "num_base_bdevs_discovered": 3, 00:13:05.273 "num_base_bdevs_operational": 3, 00:13:05.273 "base_bdevs_list": [ 00:13:05.273 { 00:13:05.273 "name": "BaseBdev1", 00:13:05.273 "uuid": "803c63b0-2584-4f59-801b-01db63e1ffc6", 00:13:05.273 "is_configured": true, 00:13:05.273 "data_offset": 2048, 00:13:05.273 "data_size": 63488 00:13:05.273 }, 00:13:05.273 { 00:13:05.273 "name": "BaseBdev2", 00:13:05.273 "uuid": "a362397c-a112-48d2-9944-51ad26890792", 00:13:05.273 "is_configured": true, 00:13:05.273 "data_offset": 2048, 00:13:05.273 
"data_size": 63488 00:13:05.273 }, 00:13:05.273 { 00:13:05.273 "name": "BaseBdev3", 00:13:05.273 "uuid": "cab00b5f-5712-476e-a520-812d8bf4f3a6", 00:13:05.273 "is_configured": true, 00:13:05.273 "data_offset": 2048, 00:13:05.273 "data_size": 63488 00:13:05.273 } 00:13:05.273 ] 00:13:05.273 }' 00:13:05.273 18:16:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.273 18:16:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.530 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:05.530 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:05.530 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:05.530 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:05.530 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:05.530 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:05.530 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:05.530 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:05.788 [2024-07-24 18:16:14.259675] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.788 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:05.788 "name": "Existed_Raid", 00:13:05.788 "aliases": [ 00:13:05.788 "14f021d7-037e-4cd3-82ba-f08daca64451" 00:13:05.788 ], 00:13:05.788 "product_name": "Raid Volume", 00:13:05.788 "block_size": 512, 00:13:05.788 "num_blocks": 190464, 00:13:05.788 "uuid": 
"14f021d7-037e-4cd3-82ba-f08daca64451", 00:13:05.788 "assigned_rate_limits": { 00:13:05.788 "rw_ios_per_sec": 0, 00:13:05.788 "rw_mbytes_per_sec": 0, 00:13:05.788 "r_mbytes_per_sec": 0, 00:13:05.788 "w_mbytes_per_sec": 0 00:13:05.788 }, 00:13:05.788 "claimed": false, 00:13:05.788 "zoned": false, 00:13:05.788 "supported_io_types": { 00:13:05.788 "read": true, 00:13:05.788 "write": true, 00:13:05.788 "unmap": true, 00:13:05.788 "flush": true, 00:13:05.788 "reset": true, 00:13:05.788 "nvme_admin": false, 00:13:05.788 "nvme_io": false, 00:13:05.788 "nvme_io_md": false, 00:13:05.788 "write_zeroes": true, 00:13:05.788 "zcopy": false, 00:13:05.788 "get_zone_info": false, 00:13:05.788 "zone_management": false, 00:13:05.788 "zone_append": false, 00:13:05.788 "compare": false, 00:13:05.788 "compare_and_write": false, 00:13:05.788 "abort": false, 00:13:05.788 "seek_hole": false, 00:13:05.788 "seek_data": false, 00:13:05.788 "copy": false, 00:13:05.788 "nvme_iov_md": false 00:13:05.788 }, 00:13:05.788 "memory_domains": [ 00:13:05.788 { 00:13:05.788 "dma_device_id": "system", 00:13:05.788 "dma_device_type": 1 00:13:05.788 }, 00:13:05.788 { 00:13:05.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.788 "dma_device_type": 2 00:13:05.788 }, 00:13:05.788 { 00:13:05.788 "dma_device_id": "system", 00:13:05.788 "dma_device_type": 1 00:13:05.788 }, 00:13:05.788 { 00:13:05.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.788 "dma_device_type": 2 00:13:05.788 }, 00:13:05.788 { 00:13:05.788 "dma_device_id": "system", 00:13:05.788 "dma_device_type": 1 00:13:05.788 }, 00:13:05.788 { 00:13:05.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.788 "dma_device_type": 2 00:13:05.788 } 00:13:05.788 ], 00:13:05.788 "driver_specific": { 00:13:05.788 "raid": { 00:13:05.788 "uuid": "14f021d7-037e-4cd3-82ba-f08daca64451", 00:13:05.788 "strip_size_kb": 64, 00:13:05.788 "state": "online", 00:13:05.788 "raid_level": "concat", 00:13:05.788 "superblock": true, 00:13:05.788 "num_base_bdevs": 
3, 00:13:05.788 "num_base_bdevs_discovered": 3, 00:13:05.788 "num_base_bdevs_operational": 3, 00:13:05.788 "base_bdevs_list": [ 00:13:05.788 { 00:13:05.788 "name": "BaseBdev1", 00:13:05.788 "uuid": "803c63b0-2584-4f59-801b-01db63e1ffc6", 00:13:05.788 "is_configured": true, 00:13:05.788 "data_offset": 2048, 00:13:05.788 "data_size": 63488 00:13:05.788 }, 00:13:05.788 { 00:13:05.788 "name": "BaseBdev2", 00:13:05.788 "uuid": "a362397c-a112-48d2-9944-51ad26890792", 00:13:05.788 "is_configured": true, 00:13:05.788 "data_offset": 2048, 00:13:05.788 "data_size": 63488 00:13:05.788 }, 00:13:05.788 { 00:13:05.788 "name": "BaseBdev3", 00:13:05.788 "uuid": "cab00b5f-5712-476e-a520-812d8bf4f3a6", 00:13:05.788 "is_configured": true, 00:13:05.788 "data_offset": 2048, 00:13:05.788 "data_size": 63488 00:13:05.788 } 00:13:05.788 ] 00:13:05.788 } 00:13:05.788 } 00:13:05.788 }' 00:13:05.788 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:05.788 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:05.788 BaseBdev2 00:13:05.788 BaseBdev3' 00:13:05.788 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:05.788 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:05.788 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:06.047 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:06.047 "name": "BaseBdev1", 00:13:06.047 "aliases": [ 00:13:06.047 "803c63b0-2584-4f59-801b-01db63e1ffc6" 00:13:06.047 ], 00:13:06.047 "product_name": "Malloc disk", 00:13:06.047 "block_size": 512, 00:13:06.047 "num_blocks": 65536, 00:13:06.047 
"uuid": "803c63b0-2584-4f59-801b-01db63e1ffc6", 00:13:06.047 "assigned_rate_limits": { 00:13:06.047 "rw_ios_per_sec": 0, 00:13:06.047 "rw_mbytes_per_sec": 0, 00:13:06.047 "r_mbytes_per_sec": 0, 00:13:06.047 "w_mbytes_per_sec": 0 00:13:06.047 }, 00:13:06.047 "claimed": true, 00:13:06.047 "claim_type": "exclusive_write", 00:13:06.047 "zoned": false, 00:13:06.047 "supported_io_types": { 00:13:06.047 "read": true, 00:13:06.047 "write": true, 00:13:06.047 "unmap": true, 00:13:06.047 "flush": true, 00:13:06.047 "reset": true, 00:13:06.047 "nvme_admin": false, 00:13:06.047 "nvme_io": false, 00:13:06.047 "nvme_io_md": false, 00:13:06.047 "write_zeroes": true, 00:13:06.047 "zcopy": true, 00:13:06.047 "get_zone_info": false, 00:13:06.047 "zone_management": false, 00:13:06.047 "zone_append": false, 00:13:06.047 "compare": false, 00:13:06.047 "compare_and_write": false, 00:13:06.047 "abort": true, 00:13:06.047 "seek_hole": false, 00:13:06.047 "seek_data": false, 00:13:06.047 "copy": true, 00:13:06.047 "nvme_iov_md": false 00:13:06.047 }, 00:13:06.047 "memory_domains": [ 00:13:06.047 { 00:13:06.047 "dma_device_id": "system", 00:13:06.047 "dma_device_type": 1 00:13:06.047 }, 00:13:06.047 { 00:13:06.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.047 "dma_device_type": 2 00:13:06.047 } 00:13:06.047 ], 00:13:06.047 "driver_specific": {} 00:13:06.047 }' 00:13:06.047 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.047 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.047 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:06.047 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.047 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.047 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:13:06.305 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.305 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.305 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:06.305 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.305 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.305 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:06.305 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:06.305 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:06.305 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:06.563 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:06.563 "name": "BaseBdev2", 00:13:06.563 "aliases": [ 00:13:06.563 "a362397c-a112-48d2-9944-51ad26890792" 00:13:06.563 ], 00:13:06.563 "product_name": "Malloc disk", 00:13:06.563 "block_size": 512, 00:13:06.563 "num_blocks": 65536, 00:13:06.563 "uuid": "a362397c-a112-48d2-9944-51ad26890792", 00:13:06.563 "assigned_rate_limits": { 00:13:06.563 "rw_ios_per_sec": 0, 00:13:06.563 "rw_mbytes_per_sec": 0, 00:13:06.563 "r_mbytes_per_sec": 0, 00:13:06.563 "w_mbytes_per_sec": 0 00:13:06.563 }, 00:13:06.563 "claimed": true, 00:13:06.563 "claim_type": "exclusive_write", 00:13:06.563 "zoned": false, 00:13:06.563 "supported_io_types": { 00:13:06.563 "read": true, 00:13:06.563 "write": true, 00:13:06.563 "unmap": true, 00:13:06.563 "flush": true, 00:13:06.563 "reset": true, 00:13:06.563 "nvme_admin": false, 00:13:06.563 
"nvme_io": false, 00:13:06.563 "nvme_io_md": false, 00:13:06.563 "write_zeroes": true, 00:13:06.563 "zcopy": true, 00:13:06.563 "get_zone_info": false, 00:13:06.563 "zone_management": false, 00:13:06.563 "zone_append": false, 00:13:06.563 "compare": false, 00:13:06.563 "compare_and_write": false, 00:13:06.563 "abort": true, 00:13:06.563 "seek_hole": false, 00:13:06.563 "seek_data": false, 00:13:06.563 "copy": true, 00:13:06.563 "nvme_iov_md": false 00:13:06.563 }, 00:13:06.563 "memory_domains": [ 00:13:06.563 { 00:13:06.563 "dma_device_id": "system", 00:13:06.563 "dma_device_type": 1 00:13:06.563 }, 00:13:06.563 { 00:13:06.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.563 "dma_device_type": 2 00:13:06.563 } 00:13:06.563 ], 00:13:06.563 "driver_specific": {} 00:13:06.563 }' 00:13:06.563 18:16:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.563 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.563 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:06.563 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.563 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.563 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:06.563 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.822 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.822 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:06.822 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.822 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.822 18:16:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:06.822 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:06.822 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:06.822 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:07.080 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:07.080 "name": "BaseBdev3", 00:13:07.080 "aliases": [ 00:13:07.080 "cab00b5f-5712-476e-a520-812d8bf4f3a6" 00:13:07.080 ], 00:13:07.080 "product_name": "Malloc disk", 00:13:07.080 "block_size": 512, 00:13:07.080 "num_blocks": 65536, 00:13:07.080 "uuid": "cab00b5f-5712-476e-a520-812d8bf4f3a6", 00:13:07.080 "assigned_rate_limits": { 00:13:07.080 "rw_ios_per_sec": 0, 00:13:07.080 "rw_mbytes_per_sec": 0, 00:13:07.080 "r_mbytes_per_sec": 0, 00:13:07.080 "w_mbytes_per_sec": 0 00:13:07.080 }, 00:13:07.080 "claimed": true, 00:13:07.080 "claim_type": "exclusive_write", 00:13:07.080 "zoned": false, 00:13:07.080 "supported_io_types": { 00:13:07.080 "read": true, 00:13:07.080 "write": true, 00:13:07.080 "unmap": true, 00:13:07.080 "flush": true, 00:13:07.080 "reset": true, 00:13:07.080 "nvme_admin": false, 00:13:07.080 "nvme_io": false, 00:13:07.080 "nvme_io_md": false, 00:13:07.080 "write_zeroes": true, 00:13:07.080 "zcopy": true, 00:13:07.080 "get_zone_info": false, 00:13:07.080 "zone_management": false, 00:13:07.080 "zone_append": false, 00:13:07.080 "compare": false, 00:13:07.080 "compare_and_write": false, 00:13:07.080 "abort": true, 00:13:07.080 "seek_hole": false, 00:13:07.080 "seek_data": false, 00:13:07.080 "copy": true, 00:13:07.080 "nvme_iov_md": false 00:13:07.080 }, 00:13:07.080 "memory_domains": [ 00:13:07.080 { 00:13:07.080 "dma_device_id": 
"system", 00:13:07.080 "dma_device_type": 1 00:13:07.080 }, 00:13:07.080 { 00:13:07.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.080 "dma_device_type": 2 00:13:07.080 } 00:13:07.080 ], 00:13:07.080 "driver_specific": {} 00:13:07.080 }' 00:13:07.080 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:07.080 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:07.080 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:07.080 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:07.080 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:07.080 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:07.080 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:07.080 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:07.338 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:07.339 [2024-07-24 18:16:15.915783] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:07.339 [2024-07-24 18:16:15.915800] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.339 [2024-07-24 
18:16:15.915828] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.339 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.597 18:16:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.597 18:16:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.597 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.597 "name": "Existed_Raid", 00:13:07.597 "uuid": "14f021d7-037e-4cd3-82ba-f08daca64451", 00:13:07.597 "strip_size_kb": 64, 00:13:07.597 "state": "offline", 00:13:07.597 "raid_level": "concat", 00:13:07.597 "superblock": true, 00:13:07.597 "num_base_bdevs": 3, 00:13:07.597 "num_base_bdevs_discovered": 2, 00:13:07.597 "num_base_bdevs_operational": 2, 00:13:07.597 "base_bdevs_list": [ 00:13:07.597 { 00:13:07.597 "name": null, 00:13:07.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.597 "is_configured": false, 00:13:07.597 "data_offset": 2048, 00:13:07.597 "data_size": 63488 00:13:07.597 }, 00:13:07.597 { 00:13:07.597 "name": "BaseBdev2", 00:13:07.597 "uuid": "a362397c-a112-48d2-9944-51ad26890792", 00:13:07.597 "is_configured": true, 00:13:07.597 "data_offset": 2048, 00:13:07.597 "data_size": 63488 00:13:07.597 }, 00:13:07.597 { 00:13:07.597 "name": "BaseBdev3", 00:13:07.597 "uuid": "cab00b5f-5712-476e-a520-812d8bf4f3a6", 00:13:07.597 "is_configured": true, 00:13:07.597 "data_offset": 2048, 00:13:07.597 "data_size": 63488 00:13:07.597 } 00:13:07.597 ] 00:13:07.597 }' 00:13:07.597 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.597 18:16:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.162 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:08.162 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:08.162 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.162 
18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:08.419 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:08.419 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:08.419 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:08.419 [2024-07-24 18:16:16.919231] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:08.419 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:08.419 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:08.419 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:08.419 18:16:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.714 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:08.714 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:08.714 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:08.714 [2024-07-24 18:16:17.257558] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:08.714 [2024-07-24 18:16:17.257589] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1087980 name Existed_Raid, state offline 00:13:09.018 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 
00:13:09.018 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:09.018 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.018 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:09.018 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:09.018 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:09.018 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:09.018 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:09.018 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:09.019 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:09.019 BaseBdev2 00:13:09.277 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:09.277 18:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:09.277 18:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:09.277 18:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:09.277 18:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:09.277 18:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:09.277 18:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:09.277 18:16:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:09.536 [ 00:13:09.536 { 00:13:09.536 "name": "BaseBdev2", 00:13:09.536 "aliases": [ 00:13:09.536 "40d1d961-580c-43dd-a3ff-2d1c13f2a65c" 00:13:09.536 ], 00:13:09.536 "product_name": "Malloc disk", 00:13:09.536 "block_size": 512, 00:13:09.536 "num_blocks": 65536, 00:13:09.536 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 00:13:09.536 "assigned_rate_limits": { 00:13:09.536 "rw_ios_per_sec": 0, 00:13:09.536 "rw_mbytes_per_sec": 0, 00:13:09.536 "r_mbytes_per_sec": 0, 00:13:09.536 "w_mbytes_per_sec": 0 00:13:09.536 }, 00:13:09.536 "claimed": false, 00:13:09.536 "zoned": false, 00:13:09.536 "supported_io_types": { 00:13:09.536 "read": true, 00:13:09.536 "write": true, 00:13:09.536 "unmap": true, 00:13:09.536 "flush": true, 00:13:09.536 "reset": true, 00:13:09.536 "nvme_admin": false, 00:13:09.536 "nvme_io": false, 00:13:09.536 "nvme_io_md": false, 00:13:09.536 "write_zeroes": true, 00:13:09.536 "zcopy": true, 00:13:09.536 "get_zone_info": false, 00:13:09.536 "zone_management": false, 00:13:09.536 "zone_append": false, 00:13:09.536 "compare": false, 00:13:09.536 "compare_and_write": false, 00:13:09.536 "abort": true, 00:13:09.536 "seek_hole": false, 00:13:09.536 "seek_data": false, 00:13:09.536 "copy": true, 00:13:09.536 "nvme_iov_md": false 00:13:09.536 }, 00:13:09.536 "memory_domains": [ 00:13:09.536 { 00:13:09.536 "dma_device_id": "system", 00:13:09.536 "dma_device_type": 1 00:13:09.536 }, 00:13:09.536 { 00:13:09.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.536 "dma_device_type": 2 00:13:09.536 } 00:13:09.536 ], 00:13:09.536 "driver_specific": {} 00:13:09.536 } 00:13:09.536 ] 00:13:09.536 18:16:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:09.536 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:09.536 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:09.536 18:16:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:09.536 BaseBdev3 00:13:09.536 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:09.536 18:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:09.536 18:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:09.536 18:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:09.536 18:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:09.536 18:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:09.536 18:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:09.795 18:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:10.053 [ 00:13:10.053 { 00:13:10.053 "name": "BaseBdev3", 00:13:10.053 "aliases": [ 00:13:10.053 "b6976c02-071c-4ab3-ad95-5af143c246c7" 00:13:10.053 ], 00:13:10.053 "product_name": "Malloc disk", 00:13:10.053 "block_size": 512, 00:13:10.053 "num_blocks": 65536, 00:13:10.053 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 
00:13:10.053 "assigned_rate_limits": { 00:13:10.053 "rw_ios_per_sec": 0, 00:13:10.053 "rw_mbytes_per_sec": 0, 00:13:10.054 "r_mbytes_per_sec": 0, 00:13:10.054 "w_mbytes_per_sec": 0 00:13:10.054 }, 00:13:10.054 "claimed": false, 00:13:10.054 "zoned": false, 00:13:10.054 "supported_io_types": { 00:13:10.054 "read": true, 00:13:10.054 "write": true, 00:13:10.054 "unmap": true, 00:13:10.054 "flush": true, 00:13:10.054 "reset": true, 00:13:10.054 "nvme_admin": false, 00:13:10.054 "nvme_io": false, 00:13:10.054 "nvme_io_md": false, 00:13:10.054 "write_zeroes": true, 00:13:10.054 "zcopy": true, 00:13:10.054 "get_zone_info": false, 00:13:10.054 "zone_management": false, 00:13:10.054 "zone_append": false, 00:13:10.054 "compare": false, 00:13:10.054 "compare_and_write": false, 00:13:10.054 "abort": true, 00:13:10.054 "seek_hole": false, 00:13:10.054 "seek_data": false, 00:13:10.054 "copy": true, 00:13:10.054 "nvme_iov_md": false 00:13:10.054 }, 00:13:10.054 "memory_domains": [ 00:13:10.054 { 00:13:10.054 "dma_device_id": "system", 00:13:10.054 "dma_device_type": 1 00:13:10.054 }, 00:13:10.054 { 00:13:10.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.054 "dma_device_type": 2 00:13:10.054 } 00:13:10.054 ], 00:13:10.054 "driver_specific": {} 00:13:10.054 } 00:13:10.054 ] 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:10.054 [2024-07-24 18:16:18.578455] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:13:10.054 [2024-07-24 18:16:18.578484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:10.054 [2024-07-24 18:16:18.578496] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.054 [2024-07-24 18:16:18.579401] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.054 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.312 18:16:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:10.312 "name": "Existed_Raid", 00:13:10.312 "uuid": "9d631ff4-0aca-4f7f-ae3c-54c9f7287044", 00:13:10.312 "strip_size_kb": 64, 00:13:10.312 "state": "configuring", 00:13:10.312 "raid_level": "concat", 00:13:10.312 "superblock": true, 00:13:10.312 "num_base_bdevs": 3, 00:13:10.312 "num_base_bdevs_discovered": 2, 00:13:10.312 "num_base_bdevs_operational": 3, 00:13:10.312 "base_bdevs_list": [ 00:13:10.312 { 00:13:10.312 "name": "BaseBdev1", 00:13:10.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.312 "is_configured": false, 00:13:10.312 "data_offset": 0, 00:13:10.312 "data_size": 0 00:13:10.312 }, 00:13:10.312 { 00:13:10.312 "name": "BaseBdev2", 00:13:10.312 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 00:13:10.312 "is_configured": true, 00:13:10.312 "data_offset": 2048, 00:13:10.312 "data_size": 63488 00:13:10.312 }, 00:13:10.312 { 00:13:10.312 "name": "BaseBdev3", 00:13:10.312 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 00:13:10.313 "is_configured": true, 00:13:10.313 "data_offset": 2048, 00:13:10.313 "data_size": 63488 00:13:10.313 } 00:13:10.313 ] 00:13:10.313 }' 00:13:10.313 18:16:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:10.313 18:16:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:10.879 [2024-07-24 18:16:19.392527] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:10.879 18:16:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.879 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.137 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:11.137 "name": "Existed_Raid", 00:13:11.137 "uuid": "9d631ff4-0aca-4f7f-ae3c-54c9f7287044", 00:13:11.137 "strip_size_kb": 64, 00:13:11.137 "state": "configuring", 00:13:11.137 "raid_level": "concat", 00:13:11.137 "superblock": true, 00:13:11.137 "num_base_bdevs": 3, 00:13:11.137 "num_base_bdevs_discovered": 1, 00:13:11.137 "num_base_bdevs_operational": 3, 00:13:11.137 "base_bdevs_list": [ 00:13:11.137 { 00:13:11.137 "name": "BaseBdev1", 00:13:11.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.137 "is_configured": false, 00:13:11.137 "data_offset": 0, 00:13:11.137 "data_size": 0 00:13:11.137 }, 00:13:11.137 { 00:13:11.137 "name": 
null, 00:13:11.137 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 00:13:11.137 "is_configured": false, 00:13:11.137 "data_offset": 2048, 00:13:11.137 "data_size": 63488 00:13:11.137 }, 00:13:11.137 { 00:13:11.137 "name": "BaseBdev3", 00:13:11.137 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 00:13:11.137 "is_configured": true, 00:13:11.137 "data_offset": 2048, 00:13:11.137 "data_size": 63488 00:13:11.137 } 00:13:11.137 ] 00:13:11.137 }' 00:13:11.137 18:16:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:11.137 18:16:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.704 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.704 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:11.704 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:11.704 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:11.962 [2024-07-24 18:16:20.413968] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.962 BaseBdev1 00:13:11.962 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:11.962 18:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:11.962 18:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:11.962 18:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:11.962 18:16:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:11.962 18:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:11.962 18:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:12.220 18:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:12.220 [ 00:13:12.220 { 00:13:12.220 "name": "BaseBdev1", 00:13:12.220 "aliases": [ 00:13:12.220 "1463c53a-a147-47c0-8711-c7f11ac90dc1" 00:13:12.220 ], 00:13:12.220 "product_name": "Malloc disk", 00:13:12.220 "block_size": 512, 00:13:12.220 "num_blocks": 65536, 00:13:12.220 "uuid": "1463c53a-a147-47c0-8711-c7f11ac90dc1", 00:13:12.220 "assigned_rate_limits": { 00:13:12.220 "rw_ios_per_sec": 0, 00:13:12.220 "rw_mbytes_per_sec": 0, 00:13:12.220 "r_mbytes_per_sec": 0, 00:13:12.220 "w_mbytes_per_sec": 0 00:13:12.220 }, 00:13:12.220 "claimed": true, 00:13:12.220 "claim_type": "exclusive_write", 00:13:12.220 "zoned": false, 00:13:12.220 "supported_io_types": { 00:13:12.220 "read": true, 00:13:12.220 "write": true, 00:13:12.220 "unmap": true, 00:13:12.220 "flush": true, 00:13:12.220 "reset": true, 00:13:12.220 "nvme_admin": false, 00:13:12.220 "nvme_io": false, 00:13:12.220 "nvme_io_md": false, 00:13:12.220 "write_zeroes": true, 00:13:12.220 "zcopy": true, 00:13:12.220 "get_zone_info": false, 00:13:12.220 "zone_management": false, 00:13:12.220 "zone_append": false, 00:13:12.220 "compare": false, 00:13:12.220 "compare_and_write": false, 00:13:12.220 "abort": true, 00:13:12.220 "seek_hole": false, 00:13:12.220 "seek_data": false, 00:13:12.220 "copy": true, 00:13:12.220 "nvme_iov_md": false 00:13:12.220 }, 00:13:12.220 "memory_domains": [ 00:13:12.220 { 00:13:12.220 "dma_device_id": 
"system", 00:13:12.220 "dma_device_type": 1 00:13:12.220 }, 00:13:12.220 { 00:13:12.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.220 "dma_device_type": 2 00:13:12.220 } 00:13:12.220 ], 00:13:12.220 "driver_specific": {} 00:13:12.220 } 00:13:12.220 ] 00:13:12.220 18:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:12.220 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:12.220 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:12.220 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:12.220 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:12.220 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:12.220 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:12.220 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:12.221 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:12.221 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:12.221 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:12.221 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.221 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.479 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:13:12.479 "name": "Existed_Raid", 00:13:12.479 "uuid": "9d631ff4-0aca-4f7f-ae3c-54c9f7287044", 00:13:12.479 "strip_size_kb": 64, 00:13:12.479 "state": "configuring", 00:13:12.479 "raid_level": "concat", 00:13:12.479 "superblock": true, 00:13:12.479 "num_base_bdevs": 3, 00:13:12.479 "num_base_bdevs_discovered": 2, 00:13:12.479 "num_base_bdevs_operational": 3, 00:13:12.479 "base_bdevs_list": [ 00:13:12.479 { 00:13:12.479 "name": "BaseBdev1", 00:13:12.479 "uuid": "1463c53a-a147-47c0-8711-c7f11ac90dc1", 00:13:12.479 "is_configured": true, 00:13:12.479 "data_offset": 2048, 00:13:12.479 "data_size": 63488 00:13:12.479 }, 00:13:12.479 { 00:13:12.479 "name": null, 00:13:12.479 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 00:13:12.479 "is_configured": false, 00:13:12.479 "data_offset": 2048, 00:13:12.479 "data_size": 63488 00:13:12.479 }, 00:13:12.479 { 00:13:12.479 "name": "BaseBdev3", 00:13:12.479 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 00:13:12.479 "is_configured": true, 00:13:12.479 "data_offset": 2048, 00:13:12.479 "data_size": 63488 00:13:12.479 } 00:13:12.479 ] 00:13:12.479 }' 00:13:12.479 18:16:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:12.479 18:16:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.046 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.046 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:13.046 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:13.046 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 
00:13:13.304 [2024-07-24 18:16:21.713352] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:13.304 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:13.304 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:13.304 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:13.304 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:13.304 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:13.304 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:13.305 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.305 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.305 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.305 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.305 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.305 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.563 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.563 "name": "Existed_Raid", 00:13:13.563 "uuid": "9d631ff4-0aca-4f7f-ae3c-54c9f7287044", 00:13:13.563 "strip_size_kb": 64, 00:13:13.563 "state": "configuring", 00:13:13.563 "raid_level": "concat", 00:13:13.563 "superblock": true, 00:13:13.563 
"num_base_bdevs": 3, 00:13:13.563 "num_base_bdevs_discovered": 1, 00:13:13.563 "num_base_bdevs_operational": 3, 00:13:13.563 "base_bdevs_list": [ 00:13:13.563 { 00:13:13.563 "name": "BaseBdev1", 00:13:13.563 "uuid": "1463c53a-a147-47c0-8711-c7f11ac90dc1", 00:13:13.563 "is_configured": true, 00:13:13.563 "data_offset": 2048, 00:13:13.563 "data_size": 63488 00:13:13.563 }, 00:13:13.563 { 00:13:13.563 "name": null, 00:13:13.563 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 00:13:13.563 "is_configured": false, 00:13:13.563 "data_offset": 2048, 00:13:13.563 "data_size": 63488 00:13:13.563 }, 00:13:13.563 { 00:13:13.563 "name": null, 00:13:13.563 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 00:13:13.563 "is_configured": false, 00:13:13.563 "data_offset": 2048, 00:13:13.563 "data_size": 63488 00:13:13.563 } 00:13:13.563 ] 00:13:13.563 }' 00:13:13.563 18:16:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.563 18:16:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.822 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.822 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:14.080 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:14.080 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:14.339 [2024-07-24 18:16:22.744011] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring 
concat 64 3 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:14.339 "name": "Existed_Raid", 00:13:14.339 "uuid": "9d631ff4-0aca-4f7f-ae3c-54c9f7287044", 00:13:14.339 "strip_size_kb": 64, 00:13:14.339 "state": "configuring", 00:13:14.339 "raid_level": "concat", 00:13:14.339 "superblock": true, 00:13:14.339 "num_base_bdevs": 3, 00:13:14.339 "num_base_bdevs_discovered": 2, 00:13:14.339 "num_base_bdevs_operational": 3, 00:13:14.339 "base_bdevs_list": [ 00:13:14.339 { 00:13:14.339 "name": "BaseBdev1", 00:13:14.339 "uuid": 
"1463c53a-a147-47c0-8711-c7f11ac90dc1", 00:13:14.339 "is_configured": true, 00:13:14.339 "data_offset": 2048, 00:13:14.339 "data_size": 63488 00:13:14.339 }, 00:13:14.339 { 00:13:14.339 "name": null, 00:13:14.339 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 00:13:14.339 "is_configured": false, 00:13:14.339 "data_offset": 2048, 00:13:14.339 "data_size": 63488 00:13:14.339 }, 00:13:14.339 { 00:13:14.339 "name": "BaseBdev3", 00:13:14.339 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 00:13:14.339 "is_configured": true, 00:13:14.339 "data_offset": 2048, 00:13:14.339 "data_size": 63488 00:13:14.339 } 00:13:14.339 ] 00:13:14.339 }' 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:14.339 18:16:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.907 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:14.907 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.165 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:15.165 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:15.165 [2024-07-24 18:16:23.754637] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.424 "name": "Existed_Raid", 00:13:15.424 "uuid": "9d631ff4-0aca-4f7f-ae3c-54c9f7287044", 00:13:15.424 "strip_size_kb": 64, 00:13:15.424 "state": "configuring", 00:13:15.424 "raid_level": "concat", 00:13:15.424 "superblock": true, 00:13:15.424 "num_base_bdevs": 3, 00:13:15.424 "num_base_bdevs_discovered": 1, 00:13:15.424 "num_base_bdevs_operational": 3, 00:13:15.424 "base_bdevs_list": [ 00:13:15.424 { 00:13:15.424 "name": null, 00:13:15.424 "uuid": "1463c53a-a147-47c0-8711-c7f11ac90dc1", 00:13:15.424 "is_configured": false, 00:13:15.424 "data_offset": 2048, 00:13:15.424 "data_size": 63488 00:13:15.424 }, 00:13:15.424 { 00:13:15.424 "name": null, 00:13:15.424 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 
00:13:15.424 "is_configured": false, 00:13:15.424 "data_offset": 2048, 00:13:15.424 "data_size": 63488 00:13:15.424 }, 00:13:15.424 { 00:13:15.424 "name": "BaseBdev3", 00:13:15.424 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 00:13:15.424 "is_configured": true, 00:13:15.424 "data_offset": 2048, 00:13:15.424 "data_size": 63488 00:13:15.424 } 00:13:15.424 ] 00:13:15.424 }' 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.424 18:16:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.990 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.990 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:15.990 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:15.990 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:16.249 [2024-07-24 18:16:24.730760] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.249 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.508 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:16.508 "name": "Existed_Raid", 00:13:16.508 "uuid": "9d631ff4-0aca-4f7f-ae3c-54c9f7287044", 00:13:16.508 "strip_size_kb": 64, 00:13:16.508 "state": "configuring", 00:13:16.508 "raid_level": "concat", 00:13:16.508 "superblock": true, 00:13:16.508 "num_base_bdevs": 3, 00:13:16.508 "num_base_bdevs_discovered": 2, 00:13:16.508 "num_base_bdevs_operational": 3, 00:13:16.508 "base_bdevs_list": [ 00:13:16.508 { 00:13:16.508 "name": null, 00:13:16.508 "uuid": "1463c53a-a147-47c0-8711-c7f11ac90dc1", 00:13:16.508 "is_configured": false, 00:13:16.508 "data_offset": 2048, 00:13:16.508 "data_size": 63488 00:13:16.508 }, 00:13:16.508 { 00:13:16.508 "name": "BaseBdev2", 00:13:16.508 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 00:13:16.508 "is_configured": true, 00:13:16.508 "data_offset": 2048, 00:13:16.508 "data_size": 63488 00:13:16.508 }, 00:13:16.508 { 00:13:16.508 "name": "BaseBdev3", 00:13:16.508 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 
00:13:16.508 "is_configured": true, 00:13:16.508 "data_offset": 2048, 00:13:16.508 "data_size": 63488 00:13:16.508 } 00:13:16.508 ] 00:13:16.508 }' 00:13:16.508 18:16:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:16.508 18:16:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.074 18:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.074 18:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:17.074 18:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:17.074 18:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.074 18:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:17.333 18:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1463c53a-a147-47c0-8711-c7f11ac90dc1 00:13:17.333 [2024-07-24 18:16:25.872422] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:17.333 [2024-07-24 18:16:25.872525] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1085d00 00:13:17.333 [2024-07-24 18:16:25.872533] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:17.333 [2024-07-24 18:16:25.872646] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1239cd0 00:13:17.333 [2024-07-24 18:16:25.872738] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x1085d00 00:13:17.333 [2024-07-24 18:16:25.872744] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1085d00 00:13:17.333 [2024-07-24 18:16:25.872804] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.333 NewBaseBdev 00:13:17.333 18:16:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:17.333 18:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:17.333 18:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:17.333 18:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:17.333 18:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:17.333 18:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:17.333 18:16:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:17.591 18:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:17.850 [ 00:13:17.850 { 00:13:17.850 "name": "NewBaseBdev", 00:13:17.850 "aliases": [ 00:13:17.850 "1463c53a-a147-47c0-8711-c7f11ac90dc1" 00:13:17.850 ], 00:13:17.850 "product_name": "Malloc disk", 00:13:17.850 "block_size": 512, 00:13:17.850 "num_blocks": 65536, 00:13:17.850 "uuid": "1463c53a-a147-47c0-8711-c7f11ac90dc1", 00:13:17.850 "assigned_rate_limits": { 00:13:17.850 "rw_ios_per_sec": 0, 00:13:17.850 "rw_mbytes_per_sec": 0, 00:13:17.850 "r_mbytes_per_sec": 0, 00:13:17.850 "w_mbytes_per_sec": 0 00:13:17.850 }, 00:13:17.850 "claimed": true, 
00:13:17.850 "claim_type": "exclusive_write", 00:13:17.850 "zoned": false, 00:13:17.850 "supported_io_types": { 00:13:17.850 "read": true, 00:13:17.850 "write": true, 00:13:17.850 "unmap": true, 00:13:17.850 "flush": true, 00:13:17.850 "reset": true, 00:13:17.850 "nvme_admin": false, 00:13:17.850 "nvme_io": false, 00:13:17.850 "nvme_io_md": false, 00:13:17.850 "write_zeroes": true, 00:13:17.850 "zcopy": true, 00:13:17.850 "get_zone_info": false, 00:13:17.850 "zone_management": false, 00:13:17.850 "zone_append": false, 00:13:17.850 "compare": false, 00:13:17.850 "compare_and_write": false, 00:13:17.850 "abort": true, 00:13:17.850 "seek_hole": false, 00:13:17.850 "seek_data": false, 00:13:17.850 "copy": true, 00:13:17.850 "nvme_iov_md": false 00:13:17.850 }, 00:13:17.850 "memory_domains": [ 00:13:17.850 { 00:13:17.850 "dma_device_id": "system", 00:13:17.850 "dma_device_type": 1 00:13:17.850 }, 00:13:17.850 { 00:13:17.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.850 "dma_device_type": 2 00:13:17.850 } 00:13:17.850 ], 00:13:17.850 "driver_specific": {} 00:13:17.850 } 00:13:17.850 ] 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:17.850 18:16:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.850 "name": "Existed_Raid", 00:13:17.850 "uuid": "9d631ff4-0aca-4f7f-ae3c-54c9f7287044", 00:13:17.850 "strip_size_kb": 64, 00:13:17.850 "state": "online", 00:13:17.850 "raid_level": "concat", 00:13:17.850 "superblock": true, 00:13:17.850 "num_base_bdevs": 3, 00:13:17.850 "num_base_bdevs_discovered": 3, 00:13:17.850 "num_base_bdevs_operational": 3, 00:13:17.850 "base_bdevs_list": [ 00:13:17.850 { 00:13:17.850 "name": "NewBaseBdev", 00:13:17.850 "uuid": "1463c53a-a147-47c0-8711-c7f11ac90dc1", 00:13:17.850 "is_configured": true, 00:13:17.850 "data_offset": 2048, 00:13:17.850 "data_size": 63488 00:13:17.850 }, 00:13:17.850 { 00:13:17.850 "name": "BaseBdev2", 00:13:17.850 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 00:13:17.850 "is_configured": true, 00:13:17.850 "data_offset": 2048, 00:13:17.850 "data_size": 63488 00:13:17.850 }, 00:13:17.850 { 00:13:17.850 "name": "BaseBdev3", 00:13:17.850 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 00:13:17.850 "is_configured": true, 00:13:17.850 "data_offset": 2048, 00:13:17.850 "data_size": 63488 00:13:17.850 } 00:13:17.850 ] 00:13:17.850 }' 00:13:17.850 
18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.850 18:16:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.416 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:18.416 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:18.417 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:18.417 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:18.417 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:18.417 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:18.417 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:18.417 18:16:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:18.675 [2024-07-24 18:16:27.019572] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.675 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:18.675 "name": "Existed_Raid", 00:13:18.675 "aliases": [ 00:13:18.675 "9d631ff4-0aca-4f7f-ae3c-54c9f7287044" 00:13:18.675 ], 00:13:18.675 "product_name": "Raid Volume", 00:13:18.675 "block_size": 512, 00:13:18.675 "num_blocks": 190464, 00:13:18.675 "uuid": "9d631ff4-0aca-4f7f-ae3c-54c9f7287044", 00:13:18.675 "assigned_rate_limits": { 00:13:18.675 "rw_ios_per_sec": 0, 00:13:18.675 "rw_mbytes_per_sec": 0, 00:13:18.675 "r_mbytes_per_sec": 0, 00:13:18.675 "w_mbytes_per_sec": 0 00:13:18.675 }, 00:13:18.675 "claimed": false, 00:13:18.675 "zoned": false, 00:13:18.675 
"supported_io_types": { 00:13:18.675 "read": true, 00:13:18.676 "write": true, 00:13:18.676 "unmap": true, 00:13:18.676 "flush": true, 00:13:18.676 "reset": true, 00:13:18.676 "nvme_admin": false, 00:13:18.676 "nvme_io": false, 00:13:18.676 "nvme_io_md": false, 00:13:18.676 "write_zeroes": true, 00:13:18.676 "zcopy": false, 00:13:18.676 "get_zone_info": false, 00:13:18.676 "zone_management": false, 00:13:18.676 "zone_append": false, 00:13:18.676 "compare": false, 00:13:18.676 "compare_and_write": false, 00:13:18.676 "abort": false, 00:13:18.676 "seek_hole": false, 00:13:18.676 "seek_data": false, 00:13:18.676 "copy": false, 00:13:18.676 "nvme_iov_md": false 00:13:18.676 }, 00:13:18.676 "memory_domains": [ 00:13:18.676 { 00:13:18.676 "dma_device_id": "system", 00:13:18.676 "dma_device_type": 1 00:13:18.676 }, 00:13:18.676 { 00:13:18.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.676 "dma_device_type": 2 00:13:18.676 }, 00:13:18.676 { 00:13:18.676 "dma_device_id": "system", 00:13:18.676 "dma_device_type": 1 00:13:18.676 }, 00:13:18.676 { 00:13:18.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.676 "dma_device_type": 2 00:13:18.676 }, 00:13:18.676 { 00:13:18.676 "dma_device_id": "system", 00:13:18.676 "dma_device_type": 1 00:13:18.676 }, 00:13:18.676 { 00:13:18.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.676 "dma_device_type": 2 00:13:18.676 } 00:13:18.676 ], 00:13:18.676 "driver_specific": { 00:13:18.676 "raid": { 00:13:18.676 "uuid": "9d631ff4-0aca-4f7f-ae3c-54c9f7287044", 00:13:18.676 "strip_size_kb": 64, 00:13:18.676 "state": "online", 00:13:18.676 "raid_level": "concat", 00:13:18.676 "superblock": true, 00:13:18.676 "num_base_bdevs": 3, 00:13:18.676 "num_base_bdevs_discovered": 3, 00:13:18.676 "num_base_bdevs_operational": 3, 00:13:18.676 "base_bdevs_list": [ 00:13:18.676 { 00:13:18.676 "name": "NewBaseBdev", 00:13:18.676 "uuid": "1463c53a-a147-47c0-8711-c7f11ac90dc1", 00:13:18.676 "is_configured": true, 00:13:18.676 "data_offset": 2048, 
00:13:18.676 "data_size": 63488 00:13:18.676 }, 00:13:18.676 { 00:13:18.676 "name": "BaseBdev2", 00:13:18.676 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 00:13:18.676 "is_configured": true, 00:13:18.676 "data_offset": 2048, 00:13:18.676 "data_size": 63488 00:13:18.676 }, 00:13:18.676 { 00:13:18.676 "name": "BaseBdev3", 00:13:18.676 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 00:13:18.676 "is_configured": true, 00:13:18.676 "data_offset": 2048, 00:13:18.676 "data_size": 63488 00:13:18.676 } 00:13:18.676 ] 00:13:18.676 } 00:13:18.676 } 00:13:18.676 }' 00:13:18.676 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.676 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:18.676 BaseBdev2 00:13:18.676 BaseBdev3' 00:13:18.676 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:18.676 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:18.676 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:18.676 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:18.676 "name": "NewBaseBdev", 00:13:18.676 "aliases": [ 00:13:18.676 "1463c53a-a147-47c0-8711-c7f11ac90dc1" 00:13:18.676 ], 00:13:18.676 "product_name": "Malloc disk", 00:13:18.676 "block_size": 512, 00:13:18.676 "num_blocks": 65536, 00:13:18.676 "uuid": "1463c53a-a147-47c0-8711-c7f11ac90dc1", 00:13:18.676 "assigned_rate_limits": { 00:13:18.676 "rw_ios_per_sec": 0, 00:13:18.676 "rw_mbytes_per_sec": 0, 00:13:18.676 "r_mbytes_per_sec": 0, 00:13:18.676 "w_mbytes_per_sec": 0 00:13:18.676 }, 00:13:18.676 "claimed": true, 00:13:18.676 "claim_type": 
"exclusive_write", 00:13:18.676 "zoned": false, 00:13:18.676 "supported_io_types": { 00:13:18.676 "read": true, 00:13:18.676 "write": true, 00:13:18.676 "unmap": true, 00:13:18.676 "flush": true, 00:13:18.676 "reset": true, 00:13:18.676 "nvme_admin": false, 00:13:18.676 "nvme_io": false, 00:13:18.676 "nvme_io_md": false, 00:13:18.676 "write_zeroes": true, 00:13:18.676 "zcopy": true, 00:13:18.676 "get_zone_info": false, 00:13:18.676 "zone_management": false, 00:13:18.676 "zone_append": false, 00:13:18.676 "compare": false, 00:13:18.676 "compare_and_write": false, 00:13:18.676 "abort": true, 00:13:18.676 "seek_hole": false, 00:13:18.676 "seek_data": false, 00:13:18.676 "copy": true, 00:13:18.676 "nvme_iov_md": false 00:13:18.676 }, 00:13:18.676 "memory_domains": [ 00:13:18.676 { 00:13:18.676 "dma_device_id": "system", 00:13:18.676 "dma_device_type": 1 00:13:18.676 }, 00:13:18.676 { 00:13:18.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.676 "dma_device_type": 2 00:13:18.676 } 00:13:18.676 ], 00:13:18.676 "driver_specific": {} 00:13:18.676 }' 00:13:18.676 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:18.935 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:18.935 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:18.935 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:18.935 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:18.935 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:18.935 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:18.935 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:18.935 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:13:18.935 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:18.935 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.193 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.193 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.193 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:19.193 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.193 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.193 "name": "BaseBdev2", 00:13:19.193 "aliases": [ 00:13:19.193 "40d1d961-580c-43dd-a3ff-2d1c13f2a65c" 00:13:19.193 ], 00:13:19.193 "product_name": "Malloc disk", 00:13:19.193 "block_size": 512, 00:13:19.193 "num_blocks": 65536, 00:13:19.193 "uuid": "40d1d961-580c-43dd-a3ff-2d1c13f2a65c", 00:13:19.193 "assigned_rate_limits": { 00:13:19.193 "rw_ios_per_sec": 0, 00:13:19.193 "rw_mbytes_per_sec": 0, 00:13:19.193 "r_mbytes_per_sec": 0, 00:13:19.193 "w_mbytes_per_sec": 0 00:13:19.193 }, 00:13:19.193 "claimed": true, 00:13:19.193 "claim_type": "exclusive_write", 00:13:19.193 "zoned": false, 00:13:19.193 "supported_io_types": { 00:13:19.193 "read": true, 00:13:19.193 "write": true, 00:13:19.193 "unmap": true, 00:13:19.193 "flush": true, 00:13:19.193 "reset": true, 00:13:19.193 "nvme_admin": false, 00:13:19.193 "nvme_io": false, 00:13:19.193 "nvme_io_md": false, 00:13:19.193 "write_zeroes": true, 00:13:19.193 "zcopy": true, 00:13:19.193 "get_zone_info": false, 00:13:19.193 "zone_management": false, 00:13:19.193 "zone_append": false, 00:13:19.193 "compare": false, 00:13:19.193 "compare_and_write": false, 
00:13:19.193 "abort": true, 00:13:19.193 "seek_hole": false, 00:13:19.193 "seek_data": false, 00:13:19.193 "copy": true, 00:13:19.193 "nvme_iov_md": false 00:13:19.193 }, 00:13:19.193 "memory_domains": [ 00:13:19.193 { 00:13:19.193 "dma_device_id": "system", 00:13:19.193 "dma_device_type": 1 00:13:19.193 }, 00:13:19.193 { 00:13:19.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.193 "dma_device_type": 2 00:13:19.193 } 00:13:19.193 ], 00:13:19.193 "driver_specific": {} 00:13:19.193 }' 00:13:19.193 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.193 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.452 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:19.452 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.452 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.452 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:19.452 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.452 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.452 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:19.452 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.452 18:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.452 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.452 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.452 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:19.452 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.711 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.711 "name": "BaseBdev3", 00:13:19.711 "aliases": [ 00:13:19.711 "b6976c02-071c-4ab3-ad95-5af143c246c7" 00:13:19.711 ], 00:13:19.711 "product_name": "Malloc disk", 00:13:19.711 "block_size": 512, 00:13:19.711 "num_blocks": 65536, 00:13:19.711 "uuid": "b6976c02-071c-4ab3-ad95-5af143c246c7", 00:13:19.711 "assigned_rate_limits": { 00:13:19.711 "rw_ios_per_sec": 0, 00:13:19.711 "rw_mbytes_per_sec": 0, 00:13:19.711 "r_mbytes_per_sec": 0, 00:13:19.711 "w_mbytes_per_sec": 0 00:13:19.711 }, 00:13:19.711 "claimed": true, 00:13:19.711 "claim_type": "exclusive_write", 00:13:19.711 "zoned": false, 00:13:19.711 "supported_io_types": { 00:13:19.711 "read": true, 00:13:19.711 "write": true, 00:13:19.711 "unmap": true, 00:13:19.711 "flush": true, 00:13:19.711 "reset": true, 00:13:19.711 "nvme_admin": false, 00:13:19.711 "nvme_io": false, 00:13:19.711 "nvme_io_md": false, 00:13:19.711 "write_zeroes": true, 00:13:19.711 "zcopy": true, 00:13:19.711 "get_zone_info": false, 00:13:19.711 "zone_management": false, 00:13:19.711 "zone_append": false, 00:13:19.711 "compare": false, 00:13:19.711 "compare_and_write": false, 00:13:19.711 "abort": true, 00:13:19.711 "seek_hole": false, 00:13:19.711 "seek_data": false, 00:13:19.711 "copy": true, 00:13:19.711 "nvme_iov_md": false 00:13:19.711 }, 00:13:19.711 "memory_domains": [ 00:13:19.711 { 00:13:19.711 "dma_device_id": "system", 00:13:19.711 "dma_device_type": 1 00:13:19.711 }, 00:13:19.711 { 00:13:19.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.711 "dma_device_type": 2 00:13:19.711 } 00:13:19.711 ], 00:13:19.711 "driver_specific": {} 00:13:19.712 }' 00:13:19.712 18:16:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.712 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.712 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:19.712 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.970 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.970 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:19.970 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.970 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.970 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:19.970 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.970 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.970 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.970 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:20.229 [2024-07-24 18:16:28.671661] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:20.229 [2024-07-24 18:16:28.671679] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:20.229 [2024-07-24 18:16:28.671713] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:20.229 [2024-07-24 18:16:28.671747] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:20.229 
[2024-07-24 18:16:28.671755] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1085d00 name Existed_Raid, state offline 00:13:20.229 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2189116 00:13:20.229 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2189116 ']' 00:13:20.229 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2189116 00:13:20.229 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:20.229 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:20.229 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2189116 00:13:20.229 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:20.230 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:20.230 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2189116' 00:13:20.230 killing process with pid 2189116 00:13:20.230 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2189116 00:13:20.230 [2024-07-24 18:16:28.742728] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:20.230 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2189116 00:13:20.230 [2024-07-24 18:16:28.765249] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.489 18:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:20.489 00:13:20.489 real 0m21.166s 00:13:20.489 user 0m38.705s 00:13:20.489 sys 0m4.031s 00:13:20.489 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:13:20.489 18:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:13:20.489 ************************************
00:13:20.489 END TEST raid_state_function_test_sb
00:13:20.489 ************************************
00:13:20.489 18:16:28 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3
00:13:20.489 18:16:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:13:20.489 18:16:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:20.489 18:16:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:20.489 ************************************
00:13:20.489 START TEST raid_superblock_test
00:13:20.489 ************************************
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=()
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=()
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=()
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']'
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64'
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2193368
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2193368 /var/tmp/spdk-raid.sock
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2193368 ']'
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:20.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:20.489 18:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.489 [2024-07-24 18:16:29.080801] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:13:20.489 [2024-07-24 18:16:29.080847] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193368 ]
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:01.0 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:01.1 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:01.2 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:01.3 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:01.4 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:01.5 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:01.6 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:01.7 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:02.0 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:02.1 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:02.2 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.747 EAL: Requested device 0000:b3:02.3 cannot be used
00:13:20.747 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b3:02.4 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b3:02.5 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b3:02.6 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b3:02.7 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:01.0 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:01.1 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:01.2 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:01.3 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:01.4 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:01.5 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:01.6 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:01.7 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:02.0 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:02.1 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:02.2 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:02.3 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:02.4 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:02.5 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:02.6 cannot be used
00:13:20.748 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:20.748 EAL: Requested device 0000:b5:02.7 cannot be used
00:13:20.748 [2024-07-24 18:16:29.174315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:20.748 [2024-07-24 18:16:29.247989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:20.748 [2024-07-24 18:16:29.299948] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:20.748 [2024-07-24 18:16:29.299974] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 ))
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:21.314 18:16:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:13:21.572 malloc1
00:13:21.572 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:21.831 [2024-07-24 18:16:30.195950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:21.831 [2024-07-24 18:16:30.195985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:21.831 [2024-07-24 18:16:30.196000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf07cb0
00:13:21.831 [2024-07-24 18:16:30.196025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:21.831 [2024-07-24 18:16:30.197192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:21.831 [2024-07-24 18:16:30.197216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:21.831 pt1
00:13:21.831 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:13:21.831 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:13:21.831 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2
00:13:21.831 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2
00:13:21.831 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:13:21.831 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:21.831 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:13:21.831 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:21.831 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:13:21.831 malloc2
00:13:21.831 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:22.090 [2024-07-24 18:16:30.544600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:22.090 [2024-07-24 18:16:30.544636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:22.090 [2024-07-24 18:16:30.544648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf090b0
00:13:22.090 [2024-07-24 18:16:30.544672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:22.090 [2024-07-24 18:16:30.545765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:22.090 [2024-07-24 18:16:30.545787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:22.090 pt2
00:13:22.090 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:13:22.090 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:13:22.090 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3
00:13:22.090 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3
00:13:22.090 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:13:22.090 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:22.090 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt)
00:13:22.090 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:22.090 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:13:22.349 malloc3
00:13:22.349 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:22.349 [2024-07-24 18:16:30.881112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:22.349 [2024-07-24 18:16:30.881143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:22.349 [2024-07-24 18:16:30.881154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x109fa80
00:13:22.349 [2024-07-24 18:16:30.881178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:22.349 [2024-07-24 18:16:30.882146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:22.349 [2024-07-24 18:16:30.882167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:22.349 pt3
00:13:22.349 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ ))
00:13:22.349 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs ))
00:13:22.349 18:16:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:13:22.608 [2024-07-24 18:16:31.053579] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:22.608 [2024-07-24 18:16:31.054414] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:22.608 [2024-07-24 18:16:31.054451] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:22.608 [2024-07-24 18:16:31.054551] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xf005e0
00:13:22.608 [2024-07-24 18:16:31.054558] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:13:22.608 [2024-07-24 18:16:31.054710] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf07980
00:13:22.608 [2024-07-24 18:16:31.054807] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf005e0
00:13:22.608 [2024-07-24 18:16:31.054813] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf005e0
00:13:22.608 [2024-07-24 18:16:31.054875] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:22.608 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:22.867 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:13:22.867 "name": "raid_bdev1",
00:13:22.867 "uuid": "38d7e14f-8bb2-492f-86c8-8127000a40c2",
00:13:22.867 "strip_size_kb": 64,
00:13:22.867 "state": "online",
00:13:22.867 "raid_level": "concat",
00:13:22.867 "superblock": true,
00:13:22.867 "num_base_bdevs": 3,
00:13:22.867 "num_base_bdevs_discovered": 3,
00:13:22.867 "num_base_bdevs_operational": 3,
00:13:22.867 "base_bdevs_list": [
00:13:22.867 {
00:13:22.867 "name": "pt1",
00:13:22.867 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:22.867 "is_configured": true,
00:13:22.867 "data_offset": 2048,
00:13:22.867 "data_size": 63488
00:13:22.867 },
00:13:22.867 {
00:13:22.867 "name": "pt2",
00:13:22.867 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:22.867 "is_configured": true,
00:13:22.867 "data_offset": 2048,
00:13:22.867 "data_size": 63488
00:13:22.867 },
00:13:22.867 {
00:13:22.867 "name": "pt3",
00:13:22.867 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:22.867 "is_configured": true,
00:13:22.867 "data_offset": 2048,
00:13:22.867 "data_size": 63488
00:13:22.867 }
00:13:22.867 ]
00:13:22.867 }'
00:13:22.867 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:13:22.867 18:16:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:23.126 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1
00:13:23.126 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1
00:13:23.126 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:13:23.126 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:13:23.126 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:13:23.126 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name
00:13:23.126 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:13:23.126 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:13:23.385 [2024-07-24 18:16:31.859813] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:23.385 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:13:23.385 "name": "raid_bdev1",
00:13:23.385 "aliases": [
00:13:23.385 "38d7e14f-8bb2-492f-86c8-8127000a40c2"
00:13:23.385 ],
00:13:23.385 "product_name": "Raid Volume",
00:13:23.385 "block_size": 512,
00:13:23.385 "num_blocks": 190464,
00:13:23.385 "uuid": "38d7e14f-8bb2-492f-86c8-8127000a40c2",
00:13:23.385 "assigned_rate_limits": {
00:13:23.385 "rw_ios_per_sec": 0,
00:13:23.385 "rw_mbytes_per_sec": 0,
00:13:23.385 "r_mbytes_per_sec": 0,
00:13:23.385 "w_mbytes_per_sec": 0
00:13:23.385 },
00:13:23.385 "claimed": false,
00:13:23.385 "zoned": false,
00:13:23.385 "supported_io_types": {
00:13:23.385 "read": true,
00:13:23.385 "write": true,
00:13:23.385 "unmap": true,
00:13:23.385 "flush": true,
00:13:23.385 "reset": true,
00:13:23.385 "nvme_admin": false,
00:13:23.385 "nvme_io": false,
00:13:23.385 "nvme_io_md": false,
00:13:23.385 "write_zeroes": true,
00:13:23.385 "zcopy": false,
00:13:23.385 "get_zone_info": false,
00:13:23.385 "zone_management": false,
00:13:23.385 "zone_append": false,
00:13:23.385 "compare": false,
00:13:23.385 "compare_and_write": false,
00:13:23.385 "abort": false,
00:13:23.385 "seek_hole": false,
00:13:23.385 "seek_data": false,
00:13:23.385 "copy": false,
00:13:23.385 "nvme_iov_md": false
00:13:23.385 },
00:13:23.385 "memory_domains": [
00:13:23.385 {
00:13:23.385 "dma_device_id": "system",
00:13:23.385 "dma_device_type": 1
00:13:23.385 },
00:13:23.385 {
00:13:23.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:23.385 "dma_device_type": 2
00:13:23.385 },
00:13:23.385 {
00:13:23.385 "dma_device_id": "system",
00:13:23.385 "dma_device_type": 1
00:13:23.385 },
00:13:23.385 {
00:13:23.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:23.385 "dma_device_type": 2
00:13:23.385 },
00:13:23.385 {
00:13:23.385 "dma_device_id": "system",
00:13:23.385 "dma_device_type": 1
00:13:23.385 },
00:13:23.385 {
00:13:23.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:23.385 "dma_device_type": 2
00:13:23.385 }
00:13:23.385 ],
00:13:23.385 "driver_specific": {
00:13:23.385 "raid": {
00:13:23.385 "uuid": "38d7e14f-8bb2-492f-86c8-8127000a40c2",
00:13:23.385 "strip_size_kb": 64,
00:13:23.385 "state": "online",
00:13:23.385 "raid_level": "concat",
00:13:23.385 "superblock": true,
00:13:23.385 "num_base_bdevs": 3,
00:13:23.385 "num_base_bdevs_discovered": 3,
00:13:23.385 "num_base_bdevs_operational": 3,
00:13:23.385 "base_bdevs_list": [
00:13:23.385 {
00:13:23.385 "name": "pt1",
00:13:23.385 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:23.385 "is_configured": true,
00:13:23.385 "data_offset": 2048,
00:13:23.385 "data_size": 63488
00:13:23.385 },
00:13:23.385 {
00:13:23.385 "name": "pt2",
00:13:23.385 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:23.385 "is_configured": true,
00:13:23.385 "data_offset": 2048,
00:13:23.385 "data_size": 63488
00:13:23.385 },
00:13:23.385 {
00:13:23.385 "name": "pt3",
00:13:23.385 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:23.385 "is_configured": true,
00:13:23.385 "data_offset": 2048,
00:13:23.385 "data_size": 63488
00:13:23.385 }
00:13:23.385 ]
00:13:23.385 }
00:13:23.385 }
00:13:23.385 }'
00:13:23.385 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:23.385 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1
00:13:23.385 pt2
00:13:23.385 pt3'
00:13:23.385 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:13:23.385 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1
00:13:23.385 18:16:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:13:23.644 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:13:23.644 "name": "pt1",
00:13:23.644 "aliases": [
00:13:23.644 "00000000-0000-0000-0000-000000000001"
00:13:23.644 ],
00:13:23.644 "product_name": "passthru",
00:13:23.644 "block_size": 512,
00:13:23.644 "num_blocks": 65536,
00:13:23.644 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:23.644 "assigned_rate_limits": {
00:13:23.644 "rw_ios_per_sec": 0,
00:13:23.644 "rw_mbytes_per_sec": 0,
00:13:23.644 "r_mbytes_per_sec": 0,
00:13:23.644 "w_mbytes_per_sec": 0
00:13:23.644 },
00:13:23.644 "claimed": true,
00:13:23.644 "claim_type": "exclusive_write",
00:13:23.644 "zoned": false,
00:13:23.644 "supported_io_types": {
00:13:23.644 "read": true,
00:13:23.644 "write": true,
00:13:23.644 "unmap": true,
00:13:23.644 "flush": true,
00:13:23.644 "reset": true,
00:13:23.644 "nvme_admin": false,
00:13:23.644 "nvme_io": false,
00:13:23.644 "nvme_io_md": false,
00:13:23.644 "write_zeroes": true,
00:13:23.644 "zcopy": true,
00:13:23.644 "get_zone_info": false,
00:13:23.644 "zone_management": false,
00:13:23.644 "zone_append": false,
00:13:23.644 "compare": false,
00:13:23.644 "compare_and_write": false,
00:13:23.644 "abort": true,
00:13:23.644 "seek_hole": false,
00:13:23.644 "seek_data": false,
00:13:23.644 "copy": true,
00:13:23.644 "nvme_iov_md": false
00:13:23.644 },
00:13:23.644 "memory_domains": [
00:13:23.644 {
00:13:23.644 "dma_device_id": "system",
00:13:23.644 "dma_device_type": 1
00:13:23.644 },
00:13:23.644 {
00:13:23.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:23.644 "dma_device_type": 2
00:13:23.644 }
00:13:23.644 ],
00:13:23.644 "driver_specific": {
00:13:23.644 "passthru": {
00:13:23.644 "name": "pt1",
00:13:23.644 "base_bdev_name": "malloc1"
00:13:23.644 }
00:13:23.644 }
00:13:23.644 }'
00:13:23.644 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:23.644 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:23.644 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:13:23.644 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:23.644 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:23.919 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:13:23.919 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:23.919 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:23.919 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:13:23.919 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:23.919 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:23.919 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:13:23.919 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:13:23.919 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2
00:13:23.919 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:13:24.195 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:13:24.195 "name": "pt2",
00:13:24.195 "aliases": [
00:13:24.195 "00000000-0000-0000-0000-000000000002"
00:13:24.195 ],
00:13:24.195 "product_name": "passthru",
00:13:24.195 "block_size": 512,
00:13:24.195 "num_blocks": 65536,
00:13:24.195 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:24.195 "assigned_rate_limits": {
00:13:24.195 "rw_ios_per_sec": 0,
00:13:24.195 "rw_mbytes_per_sec": 0,
00:13:24.195 "r_mbytes_per_sec": 0,
00:13:24.195 "w_mbytes_per_sec": 0
00:13:24.195 },
00:13:24.195 "claimed": true,
00:13:24.195 "claim_type": "exclusive_write",
00:13:24.195 "zoned": false,
00:13:24.195 "supported_io_types": {
00:13:24.195 "read": true,
00:13:24.195 "write": true,
00:13:24.195 "unmap": true,
00:13:24.195 "flush": true,
00:13:24.195 "reset": true,
00:13:24.195 "nvme_admin": false,
00:13:24.195 "nvme_io": false,
00:13:24.195 "nvme_io_md": false,
00:13:24.195 "write_zeroes": true,
00:13:24.195 "zcopy": true,
00:13:24.195 "get_zone_info": false,
00:13:24.195 "zone_management": false,
00:13:24.195 "zone_append": false,
00:13:24.195 "compare": false,
00:13:24.195 "compare_and_write": false,
00:13:24.195 "abort": true,
00:13:24.195 "seek_hole": false,
00:13:24.195 "seek_data": false,
00:13:24.195 "copy": true,
00:13:24.195 "nvme_iov_md": false
00:13:24.195 },
00:13:24.195 "memory_domains": [
00:13:24.195 {
00:13:24.195 "dma_device_id": "system",
00:13:24.195 "dma_device_type": 1
00:13:24.195 },
00:13:24.195 {
00:13:24.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:24.195 "dma_device_type": 2
00:13:24.195 }
00:13:24.195 ],
00:13:24.195 "driver_specific": {
00:13:24.195 "passthru": {
00:13:24.195 "name": "pt2",
00:13:24.195 "base_bdev_name": "malloc2"
00:13:24.195 }
00:13:24.195 }
00:13:24.195 }'
00:13:24.195 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:24.195 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:24.195 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:13:24.195 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:24.195 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:24.195 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:13:24.195 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:24.195 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:24.453 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:13:24.453 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:24.453 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:24.453 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:13:24.453 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:13:24.454 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3
00:13:24.454 18:16:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:13:24.712 "name": "pt3",
00:13:24.712 "aliases": [
00:13:24.712 "00000000-0000-0000-0000-000000000003"
00:13:24.712 ],
00:13:24.712 "product_name": "passthru",
00:13:24.712 "block_size": 512,
00:13:24.712 "num_blocks": 65536,
00:13:24.712 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:24.712 "assigned_rate_limits": {
00:13:24.712 "rw_ios_per_sec": 0,
00:13:24.712 "rw_mbytes_per_sec": 0,
00:13:24.712 "r_mbytes_per_sec": 0,
00:13:24.712 "w_mbytes_per_sec": 0
00:13:24.712 },
00:13:24.712 "claimed": true,
00:13:24.712 "claim_type": "exclusive_write",
00:13:24.712 "zoned": false,
00:13:24.712 "supported_io_types": {
00:13:24.712 "read": true,
00:13:24.712 "write": true,
00:13:24.712 "unmap": true,
00:13:24.712 "flush": true,
00:13:24.712 "reset": true,
00:13:24.712 "nvme_admin": false,
00:13:24.712 "nvme_io": false,
00:13:24.712 "nvme_io_md": false,
00:13:24.712 "write_zeroes": true,
00:13:24.712 "zcopy": true,
00:13:24.712 "get_zone_info": false,
00:13:24.712 "zone_management": false,
00:13:24.712 "zone_append": false,
00:13:24.712 "compare": false,
00:13:24.712 "compare_and_write": false,
00:13:24.712 "abort": true,
00:13:24.712 "seek_hole": false,
00:13:24.712 "seek_data": false,
00:13:24.712 "copy": true,
00:13:24.712 "nvme_iov_md": false
00:13:24.712 },
00:13:24.712 "memory_domains": [
00:13:24.712 {
00:13:24.712 "dma_device_id": "system",
00:13:24.712 "dma_device_type": 1
00:13:24.712 },
00:13:24.712 {
00:13:24.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:24.712 "dma_device_type": 2
00:13:24.712 }
00:13:24.712 ],
00:13:24.712 "driver_specific": {
00:13:24.712 "passthru": {
00:13:24.712 "name": "pt3",
00:13:24.712 "base_bdev_name": "malloc3"
00:13:24.712 }
00:13:24.712 }
00:13:24.712 }'
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:13:24.712 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:24.982 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:13:24.982 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:13:24.982 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid'
00:13:24.982 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:13:24.982 [2024-07-24 18:16:33.504054] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:24.982 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=38d7e14f-8bb2-492f-86c8-8127000a40c2
00:13:24.982 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 38d7e14f-8bb2-492f-86c8-8127000a40c2 ']'
00:13:24.982 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:13:25.244 [2024-07-24 18:16:33.676328] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:25.244 [2024-07-24 18:16:33.676341] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:25.244 [2024-07-24 18:16:33.676378] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:25.244 [2024-07-24 18:16:33.676418] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:25.244 [2024-07-24 18:16:33.676425] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf005e0 name raid_bdev1, state offline
00:13:25.244 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]'
00:13:25.244 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:25.502 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev=
00:13:25.502 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']'
00:13:25.502 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:13:25.502 18:16:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:13:25.502 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:13:25.502 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:13:25.760 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}"
00:13:25.760 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']'
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:26.019 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:26.277 [2024-07-24 18:16:34.686911] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:26.277 [2024-07-24 18:16:34.687847] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:26.277 [2024-07-24 18:16:34.687880] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:26.277 [2024-07-24 18:16:34.687913] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:26.277 [2024-07-24 18:16:34.687939] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:26.277 [2024-07-24 18:16:34.687969] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:26.277 [2024-07-24 18:16:34.687981] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:26.277 [2024-07-24 18:16:34.687988] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10ab730 name raid_bdev1, state configuring 00:13:26.277 request: 00:13:26.277 { 00:13:26.277 "name": "raid_bdev1", 00:13:26.277 
"raid_level": "concat", 00:13:26.277 "base_bdevs": [ 00:13:26.277 "malloc1", 00:13:26.277 "malloc2", 00:13:26.277 "malloc3" 00:13:26.277 ], 00:13:26.277 "strip_size_kb": 64, 00:13:26.277 "superblock": false, 00:13:26.277 "method": "bdev_raid_create", 00:13:26.277 "req_id": 1 00:13:26.277 } 00:13:26.278 Got JSON-RPC error response 00:13:26.278 response: 00:13:26.278 { 00:13:26.278 "code": -17, 00:13:26.278 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:26.278 } 00:13:26.278 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:26.278 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:26.278 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:26.278 18:16:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:26.278 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.278 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:26.536 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:26.536 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:26.536 18:16:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:26.536 [2024-07-24 18:16:35.027769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:26.536 [2024-07-24 18:16:35.027806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.536 [2024-07-24 18:16:35.027837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf07ee0 
00:13:26.536 [2024-07-24 18:16:35.027845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.536 [2024-07-24 18:16:35.029002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.536 [2024-07-24 18:16:35.029024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:26.536 [2024-07-24 18:16:35.029071] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:26.536 [2024-07-24 18:16:35.029089] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:26.536 pt1 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.536 18:16:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.795 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.795 "name": "raid_bdev1", 00:13:26.795 "uuid": "38d7e14f-8bb2-492f-86c8-8127000a40c2", 00:13:26.795 "strip_size_kb": 64, 00:13:26.795 "state": "configuring", 00:13:26.795 "raid_level": "concat", 00:13:26.795 "superblock": true, 00:13:26.795 "num_base_bdevs": 3, 00:13:26.795 "num_base_bdevs_discovered": 1, 00:13:26.795 "num_base_bdevs_operational": 3, 00:13:26.795 "base_bdevs_list": [ 00:13:26.795 { 00:13:26.795 "name": "pt1", 00:13:26.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.795 "is_configured": true, 00:13:26.795 "data_offset": 2048, 00:13:26.795 "data_size": 63488 00:13:26.795 }, 00:13:26.795 { 00:13:26.795 "name": null, 00:13:26.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.795 "is_configured": false, 00:13:26.795 "data_offset": 2048, 00:13:26.795 "data_size": 63488 00:13:26.795 }, 00:13:26.795 { 00:13:26.795 "name": null, 00:13:26.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:26.795 "is_configured": false, 00:13:26.795 "data_offset": 2048, 00:13:26.795 "data_size": 63488 00:13:26.795 } 00:13:26.795 ] 00:13:26.795 }' 00:13:26.795 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.795 18:16:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.361 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:13:27.361 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:27.361 [2024-07-24 18:16:35.817812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:27.361 [2024-07-24 18:16:35.817844] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.361 [2024-07-24 18:16:35.817856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeff3b0 00:13:27.361 [2024-07-24 18:16:35.817880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.361 [2024-07-24 18:16:35.818124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.361 [2024-07-24 18:16:35.818137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:27.361 [2024-07-24 18:16:35.818178] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:27.361 [2024-07-24 18:16:35.818191] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:27.361 pt2 00:13:27.361 18:16:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:27.619 [2024-07-24 18:16:35.986256] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:27.619 "name": "raid_bdev1", 00:13:27.619 "uuid": "38d7e14f-8bb2-492f-86c8-8127000a40c2", 00:13:27.619 "strip_size_kb": 64, 00:13:27.619 "state": "configuring", 00:13:27.619 "raid_level": "concat", 00:13:27.619 "superblock": true, 00:13:27.619 "num_base_bdevs": 3, 00:13:27.619 "num_base_bdevs_discovered": 1, 00:13:27.619 "num_base_bdevs_operational": 3, 00:13:27.619 "base_bdevs_list": [ 00:13:27.619 { 00:13:27.619 "name": "pt1", 00:13:27.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:27.619 "is_configured": true, 00:13:27.619 "data_offset": 2048, 00:13:27.619 "data_size": 63488 00:13:27.619 }, 00:13:27.619 { 00:13:27.619 "name": null, 00:13:27.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:27.619 "is_configured": false, 00:13:27.619 "data_offset": 2048, 00:13:27.619 "data_size": 63488 00:13:27.619 }, 00:13:27.619 { 00:13:27.619 "name": null, 00:13:27.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:27.619 "is_configured": false, 00:13:27.619 "data_offset": 2048, 00:13:27.619 "data_size": 63488 00:13:27.619 } 00:13:27.619 ] 00:13:27.619 }' 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:27.619 18:16:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.186 18:16:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:28.186 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:28.186 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:28.444 [2024-07-24 18:16:36.840455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:28.444 [2024-07-24 18:16:36.840492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.445 [2024-07-24 18:16:36.840525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf08150 00:13:28.445 [2024-07-24 18:16:36.840535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.445 [2024-07-24 18:16:36.840799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.445 [2024-07-24 18:16:36.840811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:28.445 [2024-07-24 18:16:36.840854] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:28.445 [2024-07-24 18:16:36.840867] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:28.445 pt2 00:13:28.445 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:28.445 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:28.445 18:16:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:28.445 [2024-07-24 18:16:37.020921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:28.445 [2024-07-24 18:16:37.020942] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.445 [2024-07-24 18:16:37.020952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a1e80 00:13:28.445 [2024-07-24 18:16:37.020959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.445 [2024-07-24 18:16:37.021163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.445 [2024-07-24 18:16:37.021175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:28.445 [2024-07-24 18:16:37.021209] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:28.445 [2024-07-24 18:16:37.021220] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:28.445 [2024-07-24 18:16:37.021289] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x10a2340 00:13:28.445 [2024-07-24 18:16:37.021295] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:28.445 [2024-07-24 18:16:37.021407] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf07180 00:13:28.445 [2024-07-24 18:16:37.021486] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10a2340 00:13:28.445 [2024-07-24 18:16:37.021496] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10a2340 00:13:28.445 [2024-07-24 18:16:37.021559] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.445 pt3 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.703 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.703 "name": "raid_bdev1", 00:13:28.703 "uuid": "38d7e14f-8bb2-492f-86c8-8127000a40c2", 00:13:28.703 "strip_size_kb": 64, 00:13:28.703 "state": "online", 00:13:28.703 "raid_level": "concat", 00:13:28.703 "superblock": true, 00:13:28.703 "num_base_bdevs": 3, 00:13:28.703 "num_base_bdevs_discovered": 3, 00:13:28.703 "num_base_bdevs_operational": 3, 00:13:28.703 "base_bdevs_list": [ 00:13:28.703 { 00:13:28.703 "name": "pt1", 00:13:28.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:28.703 "is_configured": true, 00:13:28.703 "data_offset": 2048, 00:13:28.703 "data_size": 63488 00:13:28.703 }, 00:13:28.703 { 00:13:28.703 "name": "pt2", 00:13:28.703 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:13:28.704 "is_configured": true, 00:13:28.704 "data_offset": 2048, 00:13:28.704 "data_size": 63488 00:13:28.704 }, 00:13:28.704 { 00:13:28.704 "name": "pt3", 00:13:28.704 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:28.704 "is_configured": true, 00:13:28.704 "data_offset": 2048, 00:13:28.704 "data_size": 63488 00:13:28.704 } 00:13:28.704 ] 00:13:28.704 }' 00:13:28.704 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.704 18:16:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.270 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:29.270 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:29.270 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:29.270 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:29.270 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:29.270 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:29.270 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:29.270 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:29.270 [2024-07-24 18:16:37.855242] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.528 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:29.528 "name": "raid_bdev1", 00:13:29.528 "aliases": [ 00:13:29.528 "38d7e14f-8bb2-492f-86c8-8127000a40c2" 00:13:29.528 ], 00:13:29.528 "product_name": "Raid Volume", 00:13:29.528 "block_size": 512, 00:13:29.528 "num_blocks": 
190464, 00:13:29.528 "uuid": "38d7e14f-8bb2-492f-86c8-8127000a40c2", 00:13:29.528 "assigned_rate_limits": { 00:13:29.528 "rw_ios_per_sec": 0, 00:13:29.528 "rw_mbytes_per_sec": 0, 00:13:29.528 "r_mbytes_per_sec": 0, 00:13:29.528 "w_mbytes_per_sec": 0 00:13:29.528 }, 00:13:29.528 "claimed": false, 00:13:29.528 "zoned": false, 00:13:29.529 "supported_io_types": { 00:13:29.529 "read": true, 00:13:29.529 "write": true, 00:13:29.529 "unmap": true, 00:13:29.529 "flush": true, 00:13:29.529 "reset": true, 00:13:29.529 "nvme_admin": false, 00:13:29.529 "nvme_io": false, 00:13:29.529 "nvme_io_md": false, 00:13:29.529 "write_zeroes": true, 00:13:29.529 "zcopy": false, 00:13:29.529 "get_zone_info": false, 00:13:29.529 "zone_management": false, 00:13:29.529 "zone_append": false, 00:13:29.529 "compare": false, 00:13:29.529 "compare_and_write": false, 00:13:29.529 "abort": false, 00:13:29.529 "seek_hole": false, 00:13:29.529 "seek_data": false, 00:13:29.529 "copy": false, 00:13:29.529 "nvme_iov_md": false 00:13:29.529 }, 00:13:29.529 "memory_domains": [ 00:13:29.529 { 00:13:29.529 "dma_device_id": "system", 00:13:29.529 "dma_device_type": 1 00:13:29.529 }, 00:13:29.529 { 00:13:29.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.529 "dma_device_type": 2 00:13:29.529 }, 00:13:29.529 { 00:13:29.529 "dma_device_id": "system", 00:13:29.529 "dma_device_type": 1 00:13:29.529 }, 00:13:29.529 { 00:13:29.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.529 "dma_device_type": 2 00:13:29.529 }, 00:13:29.529 { 00:13:29.529 "dma_device_id": "system", 00:13:29.529 "dma_device_type": 1 00:13:29.529 }, 00:13:29.529 { 00:13:29.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.529 "dma_device_type": 2 00:13:29.529 } 00:13:29.529 ], 00:13:29.529 "driver_specific": { 00:13:29.529 "raid": { 00:13:29.529 "uuid": "38d7e14f-8bb2-492f-86c8-8127000a40c2", 00:13:29.529 "strip_size_kb": 64, 00:13:29.529 "state": "online", 00:13:29.529 "raid_level": "concat", 00:13:29.529 "superblock": true, 
00:13:29.529 "num_base_bdevs": 3, 00:13:29.529 "num_base_bdevs_discovered": 3, 00:13:29.529 "num_base_bdevs_operational": 3, 00:13:29.529 "base_bdevs_list": [ 00:13:29.529 { 00:13:29.529 "name": "pt1", 00:13:29.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:29.529 "is_configured": true, 00:13:29.529 "data_offset": 2048, 00:13:29.529 "data_size": 63488 00:13:29.529 }, 00:13:29.529 { 00:13:29.529 "name": "pt2", 00:13:29.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:29.529 "is_configured": true, 00:13:29.529 "data_offset": 2048, 00:13:29.529 "data_size": 63488 00:13:29.529 }, 00:13:29.529 { 00:13:29.529 "name": "pt3", 00:13:29.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:29.529 "is_configured": true, 00:13:29.529 "data_offset": 2048, 00:13:29.529 "data_size": 63488 00:13:29.529 } 00:13:29.529 ] 00:13:29.529 } 00:13:29.529 } 00:13:29.529 }' 00:13:29.529 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:29.529 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:29.529 pt2 00:13:29.529 pt3' 00:13:29.529 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:29.529 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:29.529 18:16:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:29.529 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:29.529 "name": "pt1", 00:13:29.529 "aliases": [ 00:13:29.529 "00000000-0000-0000-0000-000000000001" 00:13:29.529 ], 00:13:29.529 "product_name": "passthru", 00:13:29.529 "block_size": 512, 00:13:29.529 "num_blocks": 65536, 00:13:29.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:29.529 
"assigned_rate_limits": { 00:13:29.529 "rw_ios_per_sec": 0, 00:13:29.529 "rw_mbytes_per_sec": 0, 00:13:29.529 "r_mbytes_per_sec": 0, 00:13:29.529 "w_mbytes_per_sec": 0 00:13:29.529 }, 00:13:29.529 "claimed": true, 00:13:29.529 "claim_type": "exclusive_write", 00:13:29.529 "zoned": false, 00:13:29.529 "supported_io_types": { 00:13:29.529 "read": true, 00:13:29.529 "write": true, 00:13:29.529 "unmap": true, 00:13:29.529 "flush": true, 00:13:29.529 "reset": true, 00:13:29.529 "nvme_admin": false, 00:13:29.529 "nvme_io": false, 00:13:29.529 "nvme_io_md": false, 00:13:29.529 "write_zeroes": true, 00:13:29.529 "zcopy": true, 00:13:29.529 "get_zone_info": false, 00:13:29.529 "zone_management": false, 00:13:29.529 "zone_append": false, 00:13:29.529 "compare": false, 00:13:29.529 "compare_and_write": false, 00:13:29.529 "abort": true, 00:13:29.529 "seek_hole": false, 00:13:29.529 "seek_data": false, 00:13:29.529 "copy": true, 00:13:29.529 "nvme_iov_md": false 00:13:29.529 }, 00:13:29.529 "memory_domains": [ 00:13:29.529 { 00:13:29.529 "dma_device_id": "system", 00:13:29.529 "dma_device_type": 1 00:13:29.529 }, 00:13:29.529 { 00:13:29.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.529 "dma_device_type": 2 00:13:29.529 } 00:13:29.529 ], 00:13:29.529 "driver_specific": { 00:13:29.529 "passthru": { 00:13:29.529 "name": "pt1", 00:13:29.529 "base_bdev_name": "malloc1" 00:13:29.529 } 00:13:29.529 } 00:13:29.529 }' 00:13:29.529 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:29.787 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:29.787 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:29.787 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:29.787 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:29.787 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
[[ null == null ]] 00:13:29.788 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:29.788 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:29.788 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:29.788 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:29.788 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.046 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:30.046 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:30.046 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:30.046 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:30.046 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:30.046 "name": "pt2", 00:13:30.046 "aliases": [ 00:13:30.046 "00000000-0000-0000-0000-000000000002" 00:13:30.046 ], 00:13:30.046 "product_name": "passthru", 00:13:30.046 "block_size": 512, 00:13:30.046 "num_blocks": 65536, 00:13:30.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:30.046 "assigned_rate_limits": { 00:13:30.046 "rw_ios_per_sec": 0, 00:13:30.046 "rw_mbytes_per_sec": 0, 00:13:30.046 "r_mbytes_per_sec": 0, 00:13:30.046 "w_mbytes_per_sec": 0 00:13:30.046 }, 00:13:30.046 "claimed": true, 00:13:30.046 "claim_type": "exclusive_write", 00:13:30.046 "zoned": false, 00:13:30.046 "supported_io_types": { 00:13:30.046 "read": true, 00:13:30.046 "write": true, 00:13:30.046 "unmap": true, 00:13:30.046 "flush": true, 00:13:30.046 "reset": true, 00:13:30.046 "nvme_admin": false, 00:13:30.046 "nvme_io": false, 00:13:30.046 "nvme_io_md": false, 00:13:30.046 
"write_zeroes": true, 00:13:30.046 "zcopy": true, 00:13:30.046 "get_zone_info": false, 00:13:30.046 "zone_management": false, 00:13:30.046 "zone_append": false, 00:13:30.046 "compare": false, 00:13:30.046 "compare_and_write": false, 00:13:30.046 "abort": true, 00:13:30.046 "seek_hole": false, 00:13:30.046 "seek_data": false, 00:13:30.046 "copy": true, 00:13:30.046 "nvme_iov_md": false 00:13:30.046 }, 00:13:30.046 "memory_domains": [ 00:13:30.046 { 00:13:30.046 "dma_device_id": "system", 00:13:30.046 "dma_device_type": 1 00:13:30.046 }, 00:13:30.046 { 00:13:30.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.047 "dma_device_type": 2 00:13:30.047 } 00:13:30.047 ], 00:13:30.047 "driver_specific": { 00:13:30.047 "passthru": { 00:13:30.047 "name": "pt2", 00:13:30.047 "base_bdev_name": "malloc2" 00:13:30.047 } 00:13:30.047 } 00:13:30.047 }' 00:13:30.047 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:30.047 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:30.305 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:30.305 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.305 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.305 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:30.305 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.305 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.305 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:30.305 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.305 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.563 18:16:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:30.563 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:30.563 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:30.563 18:16:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:30.563 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:30.563 "name": "pt3", 00:13:30.563 "aliases": [ 00:13:30.563 "00000000-0000-0000-0000-000000000003" 00:13:30.563 ], 00:13:30.563 "product_name": "passthru", 00:13:30.563 "block_size": 512, 00:13:30.563 "num_blocks": 65536, 00:13:30.563 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:30.563 "assigned_rate_limits": { 00:13:30.563 "rw_ios_per_sec": 0, 00:13:30.563 "rw_mbytes_per_sec": 0, 00:13:30.563 "r_mbytes_per_sec": 0, 00:13:30.563 "w_mbytes_per_sec": 0 00:13:30.563 }, 00:13:30.563 "claimed": true, 00:13:30.563 "claim_type": "exclusive_write", 00:13:30.563 "zoned": false, 00:13:30.563 "supported_io_types": { 00:13:30.563 "read": true, 00:13:30.563 "write": true, 00:13:30.563 "unmap": true, 00:13:30.563 "flush": true, 00:13:30.563 "reset": true, 00:13:30.563 "nvme_admin": false, 00:13:30.563 "nvme_io": false, 00:13:30.564 "nvme_io_md": false, 00:13:30.564 "write_zeroes": true, 00:13:30.564 "zcopy": true, 00:13:30.564 "get_zone_info": false, 00:13:30.564 "zone_management": false, 00:13:30.564 "zone_append": false, 00:13:30.564 "compare": false, 00:13:30.564 "compare_and_write": false, 00:13:30.564 "abort": true, 00:13:30.564 "seek_hole": false, 00:13:30.564 "seek_data": false, 00:13:30.564 "copy": true, 00:13:30.564 "nvme_iov_md": false 00:13:30.564 }, 00:13:30.564 "memory_domains": [ 00:13:30.564 { 00:13:30.564 "dma_device_id": "system", 00:13:30.564 "dma_device_type": 1 00:13:30.564 }, 00:13:30.564 { 00:13:30.564 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.564 "dma_device_type": 2 00:13:30.564 } 00:13:30.564 ], 00:13:30.564 "driver_specific": { 00:13:30.564 "passthru": { 00:13:30.564 "name": "pt3", 00:13:30.564 "base_bdev_name": "malloc3" 00:13:30.564 } 00:13:30.564 } 00:13:30.564 }' 00:13:30.564 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:30.564 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:30.822 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:31.081 [2024-07-24 18:16:39.547611] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 
38d7e14f-8bb2-492f-86c8-8127000a40c2 '!=' 38d7e14f-8bb2-492f-86c8-8127000a40c2 ']' 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2193368 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2193368 ']' 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2193368 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2193368 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2193368' 00:13:31.081 killing process with pid 2193368 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2193368 00:13:31.081 [2024-07-24 18:16:39.622969] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.081 [2024-07-24 18:16:39.623010] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.081 [2024-07-24 18:16:39.623053] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.081 [2024-07-24 18:16:39.623061] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x10a2340 name raid_bdev1, state offline 00:13:31.081 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2193368 00:13:31.081 [2024-07-24 18:16:39.645875] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.340 18:16:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:31.340 00:13:31.340 real 0m10.793s 00:13:31.340 user 0m19.267s 00:13:31.340 sys 0m2.077s 00:13:31.340 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.340 18:16:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.340 ************************************ 00:13:31.340 END TEST raid_superblock_test 00:13:31.340 ************************************ 00:13:31.340 18:16:39 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:13:31.340 18:16:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:31.340 18:16:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.340 18:16:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.340 ************************************ 00:13:31.340 START TEST raid_read_error_test 00:13:31.340 ************************************ 00:13:31.340 18:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:13:31.340 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:31.340 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:31.340 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:31.340 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:31.340 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:31.340 18:16:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:31.340 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:31.340 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.sHcFcLv5ih 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2195522 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2195522 /var/tmp/spdk-raid.sock 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2195522 ']' 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:31.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.341 18:16:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.600 [2024-07-24 18:16:39.970167] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:13:31.600 [2024-07-24 18:16:39.970212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2195522 ] 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:01.0 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:01.1 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:01.2 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:01.3 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:01.4 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:01.5 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:01.6 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:01.7 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:02.0 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:02.1 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:02.2 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:02.3 cannot be used 
00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:02.4 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:02.5 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:02.6 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b3:02.7 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:01.0 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:01.1 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:01.2 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:01.3 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:01.4 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:01.5 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:01.6 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:01.7 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:02.0 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:02.1 cannot be used 00:13:31.600 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:02.2 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:02.3 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:02.4 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:02.5 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:02.6 cannot be used 00:13:31.600 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:31.600 EAL: Requested device 0000:b5:02.7 cannot be used 00:13:31.600 [2024-07-24 18:16:40.065453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.600 [2024-07-24 18:16:40.141636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.600 [2024-07-24 18:16:40.195163] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.600 [2024-07-24 18:16:40.195191] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.536 18:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.536 18:16:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:32.536 18:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:32.536 18:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:32.536 BaseBdev1_malloc 00:13:32.536 18:16:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:32.536 true 00:13:32.536 18:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:32.795 [2024-07-24 18:16:41.239598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:32.795 [2024-07-24 18:16:41.239637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.795 [2024-07-24 18:16:41.239651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b30ed0 00:13:32.795 [2024-07-24 18:16:41.239690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.795 [2024-07-24 18:16:41.240900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.795 [2024-07-24 18:16:41.240922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:32.795 BaseBdev1 00:13:32.795 18:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:32.795 18:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:33.053 BaseBdev2_malloc 00:13:33.053 18:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:33.053 true 00:13:33.053 18:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:33.311 [2024-07-24 18:16:41.728486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:13:33.311 [2024-07-24 18:16:41.728520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.311 [2024-07-24 18:16:41.728533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b35b60 00:13:33.311 [2024-07-24 18:16:41.728562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.311 [2024-07-24 18:16:41.729643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.311 [2024-07-24 18:16:41.729665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:33.311 BaseBdev2 00:13:33.311 18:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:33.311 18:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:33.311 BaseBdev3_malloc 00:13:33.570 18:16:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:33.570 true 00:13:33.570 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:33.828 [2024-07-24 18:16:42.237346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:33.828 [2024-07-24 18:16:42.237378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.828 [2024-07-24 18:16:42.237395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b36ad0 00:13:33.828 [2024-07-24 18:16:42.237419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.828 [2024-07-24 
18:16:42.238457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.828 [2024-07-24 18:16:42.238478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:33.828 BaseBdev3 00:13:33.828 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:33.828 [2024-07-24 18:16:42.397792] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.828 [2024-07-24 18:16:42.398656] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.828 [2024-07-24 18:16:42.398715] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.828 [2024-07-24 18:16:42.398849] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b388e0 00:13:33.828 [2024-07-24 18:16:42.398856] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:33.828 [2024-07-24 18:16:42.398984] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x198c8b0 00:13:33.828 [2024-07-24 18:16:42.399083] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b388e0 00:13:33.828 [2024-07-24 18:16:42.399089] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b388e0 00:13:33.828 [2024-07-24 18:16:42.399153] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.828 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:33.828 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:33.829 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:13:33.829 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:33.829 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:33.829 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:33.829 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.829 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.829 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.829 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.829 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.829 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.087 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.087 "name": "raid_bdev1", 00:13:34.087 "uuid": "e68f8d3a-20b0-47fe-8576-543aed00505a", 00:13:34.087 "strip_size_kb": 64, 00:13:34.087 "state": "online", 00:13:34.087 "raid_level": "concat", 00:13:34.087 "superblock": true, 00:13:34.087 "num_base_bdevs": 3, 00:13:34.087 "num_base_bdevs_discovered": 3, 00:13:34.087 "num_base_bdevs_operational": 3, 00:13:34.087 "base_bdevs_list": [ 00:13:34.087 { 00:13:34.087 "name": "BaseBdev1", 00:13:34.087 "uuid": "94a4e077-db87-5319-93f5-79e782366ce6", 00:13:34.087 "is_configured": true, 00:13:34.087 "data_offset": 2048, 00:13:34.087 "data_size": 63488 00:13:34.087 }, 00:13:34.087 { 00:13:34.087 "name": "BaseBdev2", 00:13:34.087 "uuid": "01c470ef-bf6d-52b9-898b-77124db38f29", 00:13:34.087 "is_configured": true, 00:13:34.087 "data_offset": 2048, 00:13:34.087 "data_size": 63488 
00:13:34.087 }, 00:13:34.087 { 00:13:34.087 "name": "BaseBdev3", 00:13:34.087 "uuid": "b562a826-7d60-551d-86e8-d4d055483746", 00:13:34.087 "is_configured": true, 00:13:34.087 "data_offset": 2048, 00:13:34.087 "data_size": 63488 00:13:34.087 } 00:13:34.087 ] 00:13:34.087 }' 00:13:34.087 18:16:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.087 18:16:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.654 18:16:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:34.654 18:16:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:34.654 [2024-07-24 18:16:43.155974] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1689b30 00:13:35.587 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.846 "name": "raid_bdev1", 00:13:35.846 "uuid": "e68f8d3a-20b0-47fe-8576-543aed00505a", 00:13:35.846 "strip_size_kb": 64, 00:13:35.846 "state": "online", 00:13:35.846 "raid_level": "concat", 00:13:35.846 "superblock": true, 00:13:35.846 "num_base_bdevs": 3, 00:13:35.846 "num_base_bdevs_discovered": 3, 00:13:35.846 "num_base_bdevs_operational": 3, 00:13:35.846 "base_bdevs_list": [ 00:13:35.846 { 00:13:35.846 "name": "BaseBdev1", 00:13:35.846 "uuid": "94a4e077-db87-5319-93f5-79e782366ce6", 00:13:35.846 "is_configured": true, 00:13:35.846 "data_offset": 2048, 00:13:35.846 "data_size": 63488 00:13:35.846 }, 00:13:35.846 { 00:13:35.846 "name": "BaseBdev2", 00:13:35.846 "uuid": "01c470ef-bf6d-52b9-898b-77124db38f29", 00:13:35.846 "is_configured": true, 00:13:35.846 "data_offset": 2048, 00:13:35.846 "data_size": 63488 00:13:35.846 }, 00:13:35.846 { 00:13:35.846 "name": "BaseBdev3", 00:13:35.846 "uuid": "b562a826-7d60-551d-86e8-d4d055483746", 00:13:35.846 "is_configured": true, 00:13:35.846 "data_offset": 
2048, 00:13:35.846 "data_size": 63488 00:13:35.846 } 00:13:35.846 ] 00:13:35.846 }' 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.846 18:16:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.413 18:16:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:36.671 [2024-07-24 18:16:45.076186] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.671 [2024-07-24 18:16:45.076215] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.671 [2024-07-24 18:16:45.078270] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.671 [2024-07-24 18:16:45.078295] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.671 [2024-07-24 18:16:45.078317] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.671 [2024-07-24 18:16:45.078324] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b388e0 name raid_bdev1, state offline 00:13:36.671 0 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2195522 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2195522 ']' 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2195522 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2195522 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2195522' 00:13:36.671 killing process with pid 2195522 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2195522 00:13:36.671 [2024-07-24 18:16:45.150609] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.671 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2195522 00:13:36.671 [2024-07-24 18:16:45.168258] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:36.929 18:16:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.sHcFcLv5ih 00:13:36.929 18:16:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:36.929 18:16:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:36.929 18:16:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:13:36.929 18:16:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:36.929 18:16:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:36.929 18:16:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:36.929 18:16:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:13:36.929 00:13:36.929 real 0m5.452s 00:13:36.929 user 0m8.324s 00:13:36.929 sys 0m0.970s 00:13:36.929 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.929 18:16:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.929 ************************************ 00:13:36.929 END TEST raid_read_error_test 00:13:36.929 
************************************ 00:13:36.929 18:16:45 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:36.929 18:16:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:36.929 18:16:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.929 18:16:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:36.929 ************************************ 00:13:36.929 START TEST raid_write_error_test 00:13:36.929 ************************************ 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:36.929 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.acNLadTJw5 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2196536 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2196536 /var/tmp/spdk-raid.sock 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 2196536 ']' 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:36.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.930 18:16:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.930 [2024-07-24 18:16:45.512545] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:13:36.930 [2024-07-24 18:16:45.512588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196536 ] 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:01.0 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:01.1 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:01.2 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:01.3 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:01.4 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:13:37.188 EAL: Requested device 0000:b3:01.5 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:01.6 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:01.7 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:02.0 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:02.1 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:02.2 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:02.3 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:02.4 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:02.5 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:02.6 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b3:02.7 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:01.0 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:01.1 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:01.2 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: 
Requested device 0000:b5:01.3 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:01.4 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:01.5 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:01.6 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:01.7 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:02.0 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:02.1 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:02.2 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:02.3 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:02.4 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:02.5 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:02.6 cannot be used 00:13:37.188 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:37.188 EAL: Requested device 0000:b5:02.7 cannot be used 00:13:37.188 [2024-07-24 18:16:45.604594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.188 [2024-07-24 18:16:45.674552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.188 [2024-07-24 18:16:45.728423] 
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.188 [2024-07-24 18:16:45.728455] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.753 18:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.753 18:16:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:37.753 18:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:37.753 18:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:38.012 BaseBdev1_malloc 00:13:38.012 18:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:38.270 true 00:13:38.270 18:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:38.270 [2024-07-24 18:16:46.805088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:38.270 [2024-07-24 18:16:46.805121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.270 [2024-07-24 18:16:46.805134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26c9ed0 00:13:38.270 [2024-07-24 18:16:46.805142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.270 [2024-07-24 18:16:46.806304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.270 [2024-07-24 18:16:46.806327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:38.270 BaseBdev1 00:13:38.270 18:16:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:38.270 18:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:38.529 BaseBdev2_malloc 00:13:38.529 18:16:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:38.787 true 00:13:38.787 18:16:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:38.787 [2024-07-24 18:16:47.309996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:38.787 [2024-07-24 18:16:47.310028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.787 [2024-07-24 18:16:47.310041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26ceb60 00:13:38.787 [2024-07-24 18:16:47.310049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.787 [2024-07-24 18:16:47.311082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.787 [2024-07-24 18:16:47.311112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:38.787 BaseBdev2 00:13:38.787 18:16:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:38.787 18:16:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:39.045 BaseBdev3_malloc 00:13:39.045 18:16:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:39.320 true 00:13:39.320 18:16:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:39.320 [2024-07-24 18:16:47.831030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:39.320 [2024-07-24 18:16:47.831064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.320 [2024-07-24 18:16:47.831086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26cfad0 00:13:39.320 [2024-07-24 18:16:47.831095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.320 [2024-07-24 18:16:47.832117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.320 [2024-07-24 18:16:47.832140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:39.320 BaseBdev3 00:13:39.320 18:16:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:39.591 [2024-07-24 18:16:48.007502] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.591 [2024-07-24 18:16:48.008370] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:39.591 [2024-07-24 18:16:48.008417] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:39.591 [2024-07-24 18:16:48.008548] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x26d18e0 00:13:39.591 [2024-07-24 18:16:48.008555] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:39.591 [2024-07-24 18:16:48.008707] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25258b0 00:13:39.591 [2024-07-24 18:16:48.008813] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26d18e0 00:13:39.591 [2024-07-24 18:16:48.008820] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26d18e0 00:13:39.591 [2024-07-24 18:16:48.008888] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.591 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:39.849 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.849 "name": "raid_bdev1", 00:13:39.849 "uuid": "e3bf941b-78ad-48e8-832d-a46211fdc650", 00:13:39.849 "strip_size_kb": 64, 00:13:39.849 "state": "online", 00:13:39.849 "raid_level": "concat", 00:13:39.849 "superblock": true, 00:13:39.849 "num_base_bdevs": 3, 00:13:39.849 "num_base_bdevs_discovered": 3, 00:13:39.849 "num_base_bdevs_operational": 3, 00:13:39.849 "base_bdevs_list": [ 00:13:39.849 { 00:13:39.849 "name": "BaseBdev1", 00:13:39.849 "uuid": "9b748e39-363c-52ff-bd46-5fab0bc17467", 00:13:39.849 "is_configured": true, 00:13:39.849 "data_offset": 2048, 00:13:39.849 "data_size": 63488 00:13:39.849 }, 00:13:39.849 { 00:13:39.849 "name": "BaseBdev2", 00:13:39.849 "uuid": "3355a1af-816f-59fe-8e29-e373a412b705", 00:13:39.849 "is_configured": true, 00:13:39.849 "data_offset": 2048, 00:13:39.849 "data_size": 63488 00:13:39.849 }, 00:13:39.849 { 00:13:39.849 "name": "BaseBdev3", 00:13:39.849 "uuid": "bcce39c6-ace9-5d01-8715-9f86a3781b5e", 00:13:39.849 "is_configured": true, 00:13:39.849 "data_offset": 2048, 00:13:39.849 "data_size": 63488 00:13:39.849 } 00:13:39.849 ] 00:13:39.849 }' 00:13:39.849 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.849 18:16:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.106 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:40.106 18:16:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:40.365 [2024-07-24 18:16:48.741748] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2222b30 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.301 18:16:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.559 18:16:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:41.559 "name": "raid_bdev1", 00:13:41.559 "uuid": "e3bf941b-78ad-48e8-832d-a46211fdc650", 00:13:41.559 "strip_size_kb": 64, 00:13:41.559 "state": "online", 00:13:41.559 "raid_level": "concat", 00:13:41.559 "superblock": true, 00:13:41.559 "num_base_bdevs": 3, 00:13:41.559 "num_base_bdevs_discovered": 3, 00:13:41.559 "num_base_bdevs_operational": 3, 00:13:41.559 "base_bdevs_list": [ 00:13:41.559 { 00:13:41.559 "name": "BaseBdev1", 00:13:41.559 "uuid": "9b748e39-363c-52ff-bd46-5fab0bc17467", 00:13:41.559 "is_configured": true, 00:13:41.559 "data_offset": 2048, 00:13:41.559 "data_size": 63488 00:13:41.559 }, 00:13:41.559 { 00:13:41.559 "name": "BaseBdev2", 00:13:41.559 "uuid": "3355a1af-816f-59fe-8e29-e373a412b705", 00:13:41.559 "is_configured": true, 00:13:41.559 "data_offset": 2048, 00:13:41.559 "data_size": 63488 00:13:41.559 }, 00:13:41.559 { 00:13:41.559 "name": "BaseBdev3", 00:13:41.559 "uuid": "bcce39c6-ace9-5d01-8715-9f86a3781b5e", 00:13:41.559 "is_configured": true, 00:13:41.559 "data_offset": 2048, 00:13:41.559 "data_size": 63488 00:13:41.559 } 00:13:41.559 ] 00:13:41.559 }' 00:13:41.559 18:16:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.559 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.127 18:16:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:42.127 [2024-07-24 18:16:50.658306] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.127 [2024-07-24 18:16:50.658336] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.127 [2024-07-24 18:16:50.660441] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.127 [2024-07-24 18:16:50.660467] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:42.127 [2024-07-24 18:16:50.660489] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.127 [2024-07-24 18:16:50.660497] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26d18e0 name raid_bdev1, state offline 00:13:42.127 0 00:13:42.127 18:16:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2196536 00:13:42.127 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2196536 ']' 00:13:42.127 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2196536 00:13:42.127 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:42.127 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.127 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2196536 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2196536' 00:13:42.386 killing process with pid 2196536 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2196536 00:13:42.386 [2024-07-24 18:16:50.727936] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2196536 00:13:42.386 [2024-07-24 18:16:50.745370] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.acNLadTJw5 00:13:42.386 18:16:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:13:42.386 00:13:42.386 real 0m5.492s 00:13:42.386 user 0m8.385s 00:13:42.386 sys 0m0.989s 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.386 18:16:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.386 ************************************ 00:13:42.386 END TEST raid_write_error_test 00:13:42.386 ************************************ 00:13:42.386 18:16:50 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:42.386 18:16:50 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:13:42.386 18:16:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:42.386 18:16:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.386 18:16:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.645 ************************************ 00:13:42.645 START TEST raid_state_function_test 00:13:42.645 ************************************ 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 
00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2197688 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2197688' 00:13:42.645 Process raid pid: 2197688 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2197688 /var/tmp/spdk-raid.sock 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2197688 ']' 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:42.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.645 18:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.645 [2024-07-24 18:16:51.085552] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:13:42.645 [2024-07-24 18:16:51.085596] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:42.645 EAL: Requested device 0000:b3:01.0 cannot be used 00:13:42.646 [2024-07-24 18:16:51.179395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.904 [2024-07-24 18:16:51.254228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.904 [2024-07-24 18:16:51.306977] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.904 [2024-07-24 18:16:51.307003] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.470 18:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.470 18:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:43.470 18:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:43.470 [2024-07-24 18:16:52.037983] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.470 [2024-07-24 18:16:52.038013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.470 [2024-07-24 18:16:52.038020] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.470 [2024-07-24 18:16:52.038027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.470 [2024-07-24 18:16:52.038032] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:43.470 [2024-07-24 18:16:52.038043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:13:43.470 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.471 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.728 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:43.728 "name": "Existed_Raid", 00:13:43.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.728 "strip_size_kb": 0, 00:13:43.728 "state": "configuring", 00:13:43.728 "raid_level": "raid1", 00:13:43.728 "superblock": false, 00:13:43.728 "num_base_bdevs": 3, 00:13:43.728 "num_base_bdevs_discovered": 0, 00:13:43.728 "num_base_bdevs_operational": 3, 00:13:43.728 "base_bdevs_list": [ 00:13:43.729 { 00:13:43.729 "name": "BaseBdev1", 00:13:43.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.729 "is_configured": false, 00:13:43.729 "data_offset": 0, 00:13:43.729 "data_size": 0 00:13:43.729 }, 00:13:43.729 { 00:13:43.729 "name": "BaseBdev2", 00:13:43.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.729 "is_configured": false, 00:13:43.729 "data_offset": 0, 00:13:43.729 "data_size": 0 00:13:43.729 }, 00:13:43.729 { 00:13:43.729 "name": "BaseBdev3", 00:13:43.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.729 "is_configured": false, 00:13:43.729 "data_offset": 0, 00:13:43.729 "data_size": 0 00:13:43.729 } 00:13:43.729 ] 00:13:43.729 }' 00:13:43.729 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:43.729 18:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.295 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:44.295 [2024-07-24 18:16:52.843979] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.295 [2024-07-24 18:16:52.843999] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17591c0 name Existed_Raid, state configuring 00:13:44.295 18:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:44.552 [2024-07-24 18:16:53.012421] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.552 [2024-07-24 18:16:53.012438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.552 [2024-07-24 18:16:53.012443] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:44.552 [2024-07-24 18:16:53.012450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.552 [2024-07-24 18:16:53.012455] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.552 [2024-07-24 18:16:53.012462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.552 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:44.810 [2024-07-24 18:16:53.185289] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.810 BaseBdev1 00:13:44.810 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:44.810 18:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:44.810 18:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.810 18:16:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:44.810 18:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.810 18:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.810 18:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:44.810 18:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:45.068 [ 00:13:45.068 { 00:13:45.068 "name": "BaseBdev1", 00:13:45.068 "aliases": [ 00:13:45.068 "86a0bf8c-399e-4650-953e-2cbc3c0f9fb8" 00:13:45.068 ], 00:13:45.068 "product_name": "Malloc disk", 00:13:45.068 "block_size": 512, 00:13:45.068 "num_blocks": 65536, 00:13:45.068 "uuid": "86a0bf8c-399e-4650-953e-2cbc3c0f9fb8", 00:13:45.068 "assigned_rate_limits": { 00:13:45.068 "rw_ios_per_sec": 0, 00:13:45.068 "rw_mbytes_per_sec": 0, 00:13:45.068 "r_mbytes_per_sec": 0, 00:13:45.068 "w_mbytes_per_sec": 0 00:13:45.068 }, 00:13:45.068 "claimed": true, 00:13:45.068 "claim_type": "exclusive_write", 00:13:45.068 "zoned": false, 00:13:45.068 "supported_io_types": { 00:13:45.068 "read": true, 00:13:45.068 "write": true, 00:13:45.068 "unmap": true, 00:13:45.068 "flush": true, 00:13:45.068 "reset": true, 00:13:45.068 "nvme_admin": false, 00:13:45.068 "nvme_io": false, 00:13:45.068 "nvme_io_md": false, 00:13:45.068 "write_zeroes": true, 00:13:45.068 "zcopy": true, 00:13:45.068 "get_zone_info": false, 00:13:45.068 "zone_management": false, 00:13:45.068 "zone_append": false, 00:13:45.068 "compare": false, 00:13:45.068 "compare_and_write": false, 00:13:45.068 "abort": true, 00:13:45.068 "seek_hole": false, 00:13:45.068 "seek_data": false, 00:13:45.068 "copy": 
true, 00:13:45.068 "nvme_iov_md": false 00:13:45.068 }, 00:13:45.068 "memory_domains": [ 00:13:45.068 { 00:13:45.068 "dma_device_id": "system", 00:13:45.068 "dma_device_type": 1 00:13:45.068 }, 00:13:45.068 { 00:13:45.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.068 "dma_device_type": 2 00:13:45.068 } 00:13:45.068 ], 00:13:45.068 "driver_specific": {} 00:13:45.068 } 00:13:45.068 ] 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.068 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:45.325 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.325 "name": "Existed_Raid", 00:13:45.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.325 "strip_size_kb": 0, 00:13:45.325 "state": "configuring", 00:13:45.325 "raid_level": "raid1", 00:13:45.325 "superblock": false, 00:13:45.325 "num_base_bdevs": 3, 00:13:45.325 "num_base_bdevs_discovered": 1, 00:13:45.325 "num_base_bdevs_operational": 3, 00:13:45.325 "base_bdevs_list": [ 00:13:45.325 { 00:13:45.325 "name": "BaseBdev1", 00:13:45.325 "uuid": "86a0bf8c-399e-4650-953e-2cbc3c0f9fb8", 00:13:45.325 "is_configured": true, 00:13:45.325 "data_offset": 0, 00:13:45.325 "data_size": 65536 00:13:45.325 }, 00:13:45.325 { 00:13:45.325 "name": "BaseBdev2", 00:13:45.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.325 "is_configured": false, 00:13:45.325 "data_offset": 0, 00:13:45.325 "data_size": 0 00:13:45.325 }, 00:13:45.325 { 00:13:45.325 "name": "BaseBdev3", 00:13:45.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.325 "is_configured": false, 00:13:45.325 "data_offset": 0, 00:13:45.325 "data_size": 0 00:13:45.325 } 00:13:45.325 ] 00:13:45.325 }' 00:13:45.325 18:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.325 18:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.889 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:45.889 [2024-07-24 18:16:54.360368] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:45.889 [2024-07-24 18:16:54.360395] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1758a90 name Existed_Raid, state configuring 00:13:45.889 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:46.148 [2024-07-24 18:16:54.532827] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.148 [2024-07-24 18:16:54.533828] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.148 [2024-07-24 18:16:54.533852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.148 [2024-07-24 18:16:54.533858] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:46.148 [2024-07-24 18:16:54.533866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:46.148 18:16:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:46.148 "name": "Existed_Raid", 00:13:46.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.148 "strip_size_kb": 0, 00:13:46.148 "state": "configuring", 00:13:46.148 "raid_level": "raid1", 00:13:46.148 "superblock": false, 00:13:46.148 "num_base_bdevs": 3, 00:13:46.148 "num_base_bdevs_discovered": 1, 00:13:46.148 "num_base_bdevs_operational": 3, 00:13:46.148 "base_bdevs_list": [ 00:13:46.148 { 00:13:46.148 "name": "BaseBdev1", 00:13:46.148 "uuid": "86a0bf8c-399e-4650-953e-2cbc3c0f9fb8", 00:13:46.148 "is_configured": true, 00:13:46.148 "data_offset": 0, 00:13:46.148 "data_size": 65536 00:13:46.148 }, 00:13:46.148 { 00:13:46.148 "name": "BaseBdev2", 00:13:46.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.148 "is_configured": false, 00:13:46.148 "data_offset": 0, 00:13:46.148 "data_size": 0 00:13:46.148 }, 00:13:46.148 { 00:13:46.148 "name": "BaseBdev3", 00:13:46.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.148 "is_configured": false, 00:13:46.148 "data_offset": 0, 00:13:46.148 "data_size": 0 00:13:46.148 } 00:13:46.148 ] 00:13:46.148 }' 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:46.148 18:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.714 18:16:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:46.973 [2024-07-24 18:16:55.345647] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.973 BaseBdev2 00:13:46.973 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:46.973 18:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:46.973 18:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:46.973 18:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:46.973 18:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:46.973 18:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:46.973 18:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:46.973 18:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:47.231 [ 00:13:47.231 { 00:13:47.231 "name": "BaseBdev2", 00:13:47.231 "aliases": [ 00:13:47.231 "2ebb1741-038a-480f-b3eb-a4582c014aef" 00:13:47.231 ], 00:13:47.231 "product_name": "Malloc disk", 00:13:47.231 "block_size": 512, 00:13:47.231 "num_blocks": 65536, 00:13:47.231 "uuid": "2ebb1741-038a-480f-b3eb-a4582c014aef", 00:13:47.231 "assigned_rate_limits": { 00:13:47.231 "rw_ios_per_sec": 0, 00:13:47.231 "rw_mbytes_per_sec": 0, 00:13:47.231 "r_mbytes_per_sec": 0, 00:13:47.231 "w_mbytes_per_sec": 0 00:13:47.231 }, 00:13:47.231 "claimed": true, 00:13:47.231 "claim_type": 
"exclusive_write", 00:13:47.231 "zoned": false, 00:13:47.231 "supported_io_types": { 00:13:47.231 "read": true, 00:13:47.231 "write": true, 00:13:47.232 "unmap": true, 00:13:47.232 "flush": true, 00:13:47.232 "reset": true, 00:13:47.232 "nvme_admin": false, 00:13:47.232 "nvme_io": false, 00:13:47.232 "nvme_io_md": false, 00:13:47.232 "write_zeroes": true, 00:13:47.232 "zcopy": true, 00:13:47.232 "get_zone_info": false, 00:13:47.232 "zone_management": false, 00:13:47.232 "zone_append": false, 00:13:47.232 "compare": false, 00:13:47.232 "compare_and_write": false, 00:13:47.232 "abort": true, 00:13:47.232 "seek_hole": false, 00:13:47.232 "seek_data": false, 00:13:47.232 "copy": true, 00:13:47.232 "nvme_iov_md": false 00:13:47.232 }, 00:13:47.232 "memory_domains": [ 00:13:47.232 { 00:13:47.232 "dma_device_id": "system", 00:13:47.232 "dma_device_type": 1 00:13:47.232 }, 00:13:47.232 { 00:13:47.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.232 "dma_device_type": 2 00:13:47.232 } 00:13:47.232 ], 00:13:47.232 "driver_specific": {} 00:13:47.232 } 00:13:47.232 ] 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.232 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.490 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.490 "name": "Existed_Raid", 00:13:47.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.490 "strip_size_kb": 0, 00:13:47.490 "state": "configuring", 00:13:47.490 "raid_level": "raid1", 00:13:47.490 "superblock": false, 00:13:47.490 "num_base_bdevs": 3, 00:13:47.490 "num_base_bdevs_discovered": 2, 00:13:47.490 "num_base_bdevs_operational": 3, 00:13:47.490 "base_bdevs_list": [ 00:13:47.490 { 00:13:47.490 "name": "BaseBdev1", 00:13:47.490 "uuid": "86a0bf8c-399e-4650-953e-2cbc3c0f9fb8", 00:13:47.490 "is_configured": true, 00:13:47.490 "data_offset": 0, 00:13:47.491 "data_size": 65536 00:13:47.491 }, 00:13:47.491 { 00:13:47.491 "name": "BaseBdev2", 00:13:47.491 "uuid": "2ebb1741-038a-480f-b3eb-a4582c014aef", 00:13:47.491 "is_configured": true, 00:13:47.491 "data_offset": 0, 00:13:47.491 "data_size": 65536 00:13:47.491 }, 00:13:47.491 { 00:13:47.491 "name": "BaseBdev3", 00:13:47.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.491 "is_configured": false, 00:13:47.491 
"data_offset": 0, 00:13:47.491 "data_size": 0 00:13:47.491 } 00:13:47.491 ] 00:13:47.491 }' 00:13:47.491 18:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.491 18:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.057 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:48.057 [2024-07-24 18:16:56.543458] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.057 [2024-07-24 18:16:56.543490] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1759980 00:13:48.057 [2024-07-24 18:16:56.543496] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:48.057 [2024-07-24 18:16:56.543620] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1759650 00:13:48.057 [2024-07-24 18:16:56.543725] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1759980 00:13:48.057 [2024-07-24 18:16:56.543731] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1759980 00:13:48.057 [2024-07-24 18:16:56.543849] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.057 BaseBdev3 00:13:48.057 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:48.057 18:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:48.057 18:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:48.057 18:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:48.057 18:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:48.057 18:16:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:48.057 18:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:48.315 18:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:48.315 [ 00:13:48.315 { 00:13:48.315 "name": "BaseBdev3", 00:13:48.315 "aliases": [ 00:13:48.315 "a00853c4-fabf-487a-a931-21f3b7a2f96b" 00:13:48.315 ], 00:13:48.315 "product_name": "Malloc disk", 00:13:48.315 "block_size": 512, 00:13:48.315 "num_blocks": 65536, 00:13:48.315 "uuid": "a00853c4-fabf-487a-a931-21f3b7a2f96b", 00:13:48.315 "assigned_rate_limits": { 00:13:48.315 "rw_ios_per_sec": 0, 00:13:48.315 "rw_mbytes_per_sec": 0, 00:13:48.315 "r_mbytes_per_sec": 0, 00:13:48.315 "w_mbytes_per_sec": 0 00:13:48.315 }, 00:13:48.315 "claimed": true, 00:13:48.315 "claim_type": "exclusive_write", 00:13:48.315 "zoned": false, 00:13:48.315 "supported_io_types": { 00:13:48.315 "read": true, 00:13:48.315 "write": true, 00:13:48.315 "unmap": true, 00:13:48.315 "flush": true, 00:13:48.315 "reset": true, 00:13:48.315 "nvme_admin": false, 00:13:48.315 "nvme_io": false, 00:13:48.315 "nvme_io_md": false, 00:13:48.315 "write_zeroes": true, 00:13:48.315 "zcopy": true, 00:13:48.315 "get_zone_info": false, 00:13:48.315 "zone_management": false, 00:13:48.315 "zone_append": false, 00:13:48.315 "compare": false, 00:13:48.315 "compare_and_write": false, 00:13:48.315 "abort": true, 00:13:48.315 "seek_hole": false, 00:13:48.315 "seek_data": false, 00:13:48.315 "copy": true, 00:13:48.315 "nvme_iov_md": false 00:13:48.315 }, 00:13:48.315 "memory_domains": [ 00:13:48.315 { 00:13:48.315 "dma_device_id": "system", 00:13:48.315 "dma_device_type": 1 00:13:48.315 }, 00:13:48.315 { 
00:13:48.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.315 "dma_device_type": 2 00:13:48.315 } 00:13:48.315 ], 00:13:48.315 "driver_specific": {} 00:13:48.315 } 00:13:48.315 ] 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.573 18:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.573 18:16:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.573 "name": "Existed_Raid", 00:13:48.573 "uuid": "5f8a8c36-3228-488b-831c-21677c06e1fc", 00:13:48.573 "strip_size_kb": 0, 00:13:48.573 "state": "online", 00:13:48.573 "raid_level": "raid1", 00:13:48.573 "superblock": false, 00:13:48.573 "num_base_bdevs": 3, 00:13:48.573 "num_base_bdevs_discovered": 3, 00:13:48.573 "num_base_bdevs_operational": 3, 00:13:48.573 "base_bdevs_list": [ 00:13:48.573 { 00:13:48.573 "name": "BaseBdev1", 00:13:48.573 "uuid": "86a0bf8c-399e-4650-953e-2cbc3c0f9fb8", 00:13:48.573 "is_configured": true, 00:13:48.573 "data_offset": 0, 00:13:48.573 "data_size": 65536 00:13:48.573 }, 00:13:48.573 { 00:13:48.574 "name": "BaseBdev2", 00:13:48.574 "uuid": "2ebb1741-038a-480f-b3eb-a4582c014aef", 00:13:48.574 "is_configured": true, 00:13:48.574 "data_offset": 0, 00:13:48.574 "data_size": 65536 00:13:48.574 }, 00:13:48.574 { 00:13:48.574 "name": "BaseBdev3", 00:13:48.574 "uuid": "a00853c4-fabf-487a-a931-21f3b7a2f96b", 00:13:48.574 "is_configured": true, 00:13:48.574 "data_offset": 0, 00:13:48.574 "data_size": 65536 00:13:48.574 } 00:13:48.574 ] 00:13:48.574 }' 00:13:48.574 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.574 18:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.140 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:49.140 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:49.140 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:49.140 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:49.140 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:49.140 18:16:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:49.140 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:49.140 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:49.398 [2024-07-24 18:16:57.742766] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.398 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:49.398 "name": "Existed_Raid", 00:13:49.398 "aliases": [ 00:13:49.398 "5f8a8c36-3228-488b-831c-21677c06e1fc" 00:13:49.398 ], 00:13:49.398 "product_name": "Raid Volume", 00:13:49.398 "block_size": 512, 00:13:49.398 "num_blocks": 65536, 00:13:49.398 "uuid": "5f8a8c36-3228-488b-831c-21677c06e1fc", 00:13:49.398 "assigned_rate_limits": { 00:13:49.398 "rw_ios_per_sec": 0, 00:13:49.398 "rw_mbytes_per_sec": 0, 00:13:49.398 "r_mbytes_per_sec": 0, 00:13:49.398 "w_mbytes_per_sec": 0 00:13:49.398 }, 00:13:49.398 "claimed": false, 00:13:49.398 "zoned": false, 00:13:49.398 "supported_io_types": { 00:13:49.398 "read": true, 00:13:49.398 "write": true, 00:13:49.398 "unmap": false, 00:13:49.398 "flush": false, 00:13:49.398 "reset": true, 00:13:49.398 "nvme_admin": false, 00:13:49.398 "nvme_io": false, 00:13:49.398 "nvme_io_md": false, 00:13:49.398 "write_zeroes": true, 00:13:49.398 "zcopy": false, 00:13:49.398 "get_zone_info": false, 00:13:49.398 "zone_management": false, 00:13:49.398 "zone_append": false, 00:13:49.398 "compare": false, 00:13:49.398 "compare_and_write": false, 00:13:49.398 "abort": false, 00:13:49.398 "seek_hole": false, 00:13:49.398 "seek_data": false, 00:13:49.398 "copy": false, 00:13:49.398 "nvme_iov_md": false 00:13:49.398 }, 00:13:49.398 "memory_domains": [ 00:13:49.399 { 00:13:49.399 "dma_device_id": "system", 00:13:49.399 "dma_device_type": 1 00:13:49.399 }, 
00:13:49.399 { 00:13:49.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.399 "dma_device_type": 2 00:13:49.399 }, 00:13:49.399 { 00:13:49.399 "dma_device_id": "system", 00:13:49.399 "dma_device_type": 1 00:13:49.399 }, 00:13:49.399 { 00:13:49.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.399 "dma_device_type": 2 00:13:49.399 }, 00:13:49.399 { 00:13:49.399 "dma_device_id": "system", 00:13:49.399 "dma_device_type": 1 00:13:49.399 }, 00:13:49.399 { 00:13:49.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.399 "dma_device_type": 2 00:13:49.399 } 00:13:49.399 ], 00:13:49.399 "driver_specific": { 00:13:49.399 "raid": { 00:13:49.399 "uuid": "5f8a8c36-3228-488b-831c-21677c06e1fc", 00:13:49.399 "strip_size_kb": 0, 00:13:49.399 "state": "online", 00:13:49.399 "raid_level": "raid1", 00:13:49.399 "superblock": false, 00:13:49.399 "num_base_bdevs": 3, 00:13:49.399 "num_base_bdevs_discovered": 3, 00:13:49.399 "num_base_bdevs_operational": 3, 00:13:49.399 "base_bdevs_list": [ 00:13:49.399 { 00:13:49.399 "name": "BaseBdev1", 00:13:49.399 "uuid": "86a0bf8c-399e-4650-953e-2cbc3c0f9fb8", 00:13:49.399 "is_configured": true, 00:13:49.399 "data_offset": 0, 00:13:49.399 "data_size": 65536 00:13:49.399 }, 00:13:49.399 { 00:13:49.399 "name": "BaseBdev2", 00:13:49.399 "uuid": "2ebb1741-038a-480f-b3eb-a4582c014aef", 00:13:49.399 "is_configured": true, 00:13:49.399 "data_offset": 0, 00:13:49.399 "data_size": 65536 00:13:49.399 }, 00:13:49.399 { 00:13:49.399 "name": "BaseBdev3", 00:13:49.399 "uuid": "a00853c4-fabf-487a-a931-21f3b7a2f96b", 00:13:49.399 "is_configured": true, 00:13:49.399 "data_offset": 0, 00:13:49.399 "data_size": 65536 00:13:49.399 } 00:13:49.399 ] 00:13:49.399 } 00:13:49.399 } 00:13:49.399 }' 00:13:49.399 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.399 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:13:49.399 BaseBdev2 00:13:49.399 BaseBdev3' 00:13:49.399 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:49.399 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:49.399 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:49.399 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:49.399 "name": "BaseBdev1", 00:13:49.399 "aliases": [ 00:13:49.399 "86a0bf8c-399e-4650-953e-2cbc3c0f9fb8" 00:13:49.399 ], 00:13:49.399 "product_name": "Malloc disk", 00:13:49.399 "block_size": 512, 00:13:49.399 "num_blocks": 65536, 00:13:49.399 "uuid": "86a0bf8c-399e-4650-953e-2cbc3c0f9fb8", 00:13:49.399 "assigned_rate_limits": { 00:13:49.399 "rw_ios_per_sec": 0, 00:13:49.399 "rw_mbytes_per_sec": 0, 00:13:49.399 "r_mbytes_per_sec": 0, 00:13:49.399 "w_mbytes_per_sec": 0 00:13:49.399 }, 00:13:49.399 "claimed": true, 00:13:49.399 "claim_type": "exclusive_write", 00:13:49.399 "zoned": false, 00:13:49.399 "supported_io_types": { 00:13:49.399 "read": true, 00:13:49.399 "write": true, 00:13:49.399 "unmap": true, 00:13:49.399 "flush": true, 00:13:49.399 "reset": true, 00:13:49.399 "nvme_admin": false, 00:13:49.399 "nvme_io": false, 00:13:49.399 "nvme_io_md": false, 00:13:49.399 "write_zeroes": true, 00:13:49.399 "zcopy": true, 00:13:49.399 "get_zone_info": false, 00:13:49.399 "zone_management": false, 00:13:49.399 "zone_append": false, 00:13:49.399 "compare": false, 00:13:49.399 "compare_and_write": false, 00:13:49.399 "abort": true, 00:13:49.399 "seek_hole": false, 00:13:49.399 "seek_data": false, 00:13:49.399 "copy": true, 00:13:49.399 "nvme_iov_md": false 00:13:49.399 }, 00:13:49.399 "memory_domains": [ 00:13:49.399 { 00:13:49.399 "dma_device_id": "system", 00:13:49.399 
"dma_device_type": 1 00:13:49.399 }, 00:13:49.399 { 00:13:49.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.399 "dma_device_type": 2 00:13:49.399 } 00:13:49.399 ], 00:13:49.399 "driver_specific": {} 00:13:49.399 }' 00:13:49.399 18:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.657 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.657 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:49.657 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.657 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.657 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:49.657 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.657 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.658 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:49.658 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:49.658 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:49.916 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:49.916 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:49.916 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:49.916 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:49.916 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:49.916 "name": 
"BaseBdev2", 00:13:49.916 "aliases": [ 00:13:49.916 "2ebb1741-038a-480f-b3eb-a4582c014aef" 00:13:49.916 ], 00:13:49.916 "product_name": "Malloc disk", 00:13:49.916 "block_size": 512, 00:13:49.916 "num_blocks": 65536, 00:13:49.916 "uuid": "2ebb1741-038a-480f-b3eb-a4582c014aef", 00:13:49.916 "assigned_rate_limits": { 00:13:49.916 "rw_ios_per_sec": 0, 00:13:49.916 "rw_mbytes_per_sec": 0, 00:13:49.916 "r_mbytes_per_sec": 0, 00:13:49.916 "w_mbytes_per_sec": 0 00:13:49.916 }, 00:13:49.916 "claimed": true, 00:13:49.916 "claim_type": "exclusive_write", 00:13:49.916 "zoned": false, 00:13:49.916 "supported_io_types": { 00:13:49.916 "read": true, 00:13:49.916 "write": true, 00:13:49.916 "unmap": true, 00:13:49.916 "flush": true, 00:13:49.916 "reset": true, 00:13:49.916 "nvme_admin": false, 00:13:49.916 "nvme_io": false, 00:13:49.916 "nvme_io_md": false, 00:13:49.916 "write_zeroes": true, 00:13:49.916 "zcopy": true, 00:13:49.916 "get_zone_info": false, 00:13:49.916 "zone_management": false, 00:13:49.916 "zone_append": false, 00:13:49.916 "compare": false, 00:13:49.916 "compare_and_write": false, 00:13:49.916 "abort": true, 00:13:49.916 "seek_hole": false, 00:13:49.916 "seek_data": false, 00:13:49.916 "copy": true, 00:13:49.916 "nvme_iov_md": false 00:13:49.916 }, 00:13:49.916 "memory_domains": [ 00:13:49.916 { 00:13:49.916 "dma_device_id": "system", 00:13:49.916 "dma_device_type": 1 00:13:49.916 }, 00:13:49.916 { 00:13:49.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.916 "dma_device_type": 2 00:13:49.916 } 00:13:49.916 ], 00:13:49.916 "driver_specific": {} 00:13:49.916 }' 00:13:49.916 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.916 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:50.174 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:50.432 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:50.432 "name": "BaseBdev3", 00:13:50.432 "aliases": [ 00:13:50.432 "a00853c4-fabf-487a-a931-21f3b7a2f96b" 00:13:50.432 ], 00:13:50.432 "product_name": "Malloc disk", 00:13:50.432 "block_size": 512, 00:13:50.432 "num_blocks": 65536, 00:13:50.432 "uuid": "a00853c4-fabf-487a-a931-21f3b7a2f96b", 00:13:50.432 "assigned_rate_limits": { 00:13:50.432 "rw_ios_per_sec": 0, 00:13:50.432 "rw_mbytes_per_sec": 0, 00:13:50.432 "r_mbytes_per_sec": 0, 00:13:50.432 "w_mbytes_per_sec": 0 00:13:50.432 }, 00:13:50.432 "claimed": true, 00:13:50.432 "claim_type": "exclusive_write", 00:13:50.432 "zoned": false, 00:13:50.432 "supported_io_types": { 
00:13:50.432 "read": true, 00:13:50.432 "write": true, 00:13:50.432 "unmap": true, 00:13:50.432 "flush": true, 00:13:50.432 "reset": true, 00:13:50.432 "nvme_admin": false, 00:13:50.432 "nvme_io": false, 00:13:50.432 "nvme_io_md": false, 00:13:50.432 "write_zeroes": true, 00:13:50.432 "zcopy": true, 00:13:50.432 "get_zone_info": false, 00:13:50.432 "zone_management": false, 00:13:50.432 "zone_append": false, 00:13:50.432 "compare": false, 00:13:50.432 "compare_and_write": false, 00:13:50.432 "abort": true, 00:13:50.432 "seek_hole": false, 00:13:50.432 "seek_data": false, 00:13:50.432 "copy": true, 00:13:50.432 "nvme_iov_md": false 00:13:50.432 }, 00:13:50.432 "memory_domains": [ 00:13:50.432 { 00:13:50.432 "dma_device_id": "system", 00:13:50.432 "dma_device_type": 1 00:13:50.432 }, 00:13:50.432 { 00:13:50.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.432 "dma_device_type": 2 00:13:50.432 } 00:13:50.432 ], 00:13:50.432 "driver_specific": {} 00:13:50.432 }' 00:13:50.432 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.432 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.432 18:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:50.432 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.690 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.690 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:50.690 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.690 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.690 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:50.690 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:13:50.690 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.690 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.690 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:50.948 [2024-07-24 18:16:59.374938] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.948 18:16:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.948 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.206 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:51.206 "name": "Existed_Raid", 00:13:51.206 "uuid": "5f8a8c36-3228-488b-831c-21677c06e1fc", 00:13:51.206 "strip_size_kb": 0, 00:13:51.206 "state": "online", 00:13:51.206 "raid_level": "raid1", 00:13:51.206 "superblock": false, 00:13:51.206 "num_base_bdevs": 3, 00:13:51.206 "num_base_bdevs_discovered": 2, 00:13:51.206 "num_base_bdevs_operational": 2, 00:13:51.206 "base_bdevs_list": [ 00:13:51.206 { 00:13:51.206 "name": null, 00:13:51.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.206 "is_configured": false, 00:13:51.206 "data_offset": 0, 00:13:51.206 "data_size": 65536 00:13:51.206 }, 00:13:51.206 { 00:13:51.206 "name": "BaseBdev2", 00:13:51.206 "uuid": "2ebb1741-038a-480f-b3eb-a4582c014aef", 00:13:51.206 "is_configured": true, 00:13:51.206 "data_offset": 0, 00:13:51.206 "data_size": 65536 00:13:51.206 }, 00:13:51.206 { 00:13:51.206 "name": "BaseBdev3", 00:13:51.206 "uuid": "a00853c4-fabf-487a-a931-21f3b7a2f96b", 00:13:51.206 "is_configured": true, 00:13:51.206 "data_offset": 0, 00:13:51.206 "data_size": 65536 00:13:51.206 } 00:13:51.206 ] 00:13:51.206 }' 00:13:51.206 18:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:51.206 18:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.773 18:17:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:51.773 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:51.773 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.773 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:51.773 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:51.773 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:51.773 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:52.031 [2024-07-24 18:17:00.398399] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.031 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:52.031 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:52.031 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.031 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:52.031 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:52.031 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:52.031 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:52.290 [2024-07-24 
18:17:00.728861] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:52.290 [2024-07-24 18:17:00.728920] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.290 [2024-07-24 18:17:00.738456] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.290 [2024-07-24 18:17:00.738479] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.290 [2024-07-24 18:17:00.738485] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1759980 name Existed_Raid, state offline 00:13:52.290 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:52.290 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:52.290 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:52.290 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.548 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:52.548 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:52.548 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:52.548 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:52.548 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:52.548 18:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:52.548 BaseBdev2 00:13:52.548 18:17:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:52.548 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:52.548 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.548 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:52.548 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.548 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.548 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:52.806 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:53.065 [ 00:13:53.065 { 00:13:53.065 "name": "BaseBdev2", 00:13:53.065 "aliases": [ 00:13:53.065 "439728c6-216d-485f-9dc1-3b9de9bef00d" 00:13:53.065 ], 00:13:53.065 "product_name": "Malloc disk", 00:13:53.065 "block_size": 512, 00:13:53.065 "num_blocks": 65536, 00:13:53.065 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:13:53.065 "assigned_rate_limits": { 00:13:53.065 "rw_ios_per_sec": 0, 00:13:53.065 "rw_mbytes_per_sec": 0, 00:13:53.065 "r_mbytes_per_sec": 0, 00:13:53.065 "w_mbytes_per_sec": 0 00:13:53.065 }, 00:13:53.065 "claimed": false, 00:13:53.065 "zoned": false, 00:13:53.065 "supported_io_types": { 00:13:53.065 "read": true, 00:13:53.065 "write": true, 00:13:53.065 "unmap": true, 00:13:53.065 "flush": true, 00:13:53.065 "reset": true, 00:13:53.065 "nvme_admin": false, 00:13:53.065 "nvme_io": false, 00:13:53.065 "nvme_io_md": false, 00:13:53.065 "write_zeroes": true, 00:13:53.065 "zcopy": true, 00:13:53.065 "get_zone_info": false, 
00:13:53.065 "zone_management": false, 00:13:53.065 "zone_append": false, 00:13:53.065 "compare": false, 00:13:53.065 "compare_and_write": false, 00:13:53.065 "abort": true, 00:13:53.065 "seek_hole": false, 00:13:53.065 "seek_data": false, 00:13:53.065 "copy": true, 00:13:53.065 "nvme_iov_md": false 00:13:53.065 }, 00:13:53.065 "memory_domains": [ 00:13:53.065 { 00:13:53.065 "dma_device_id": "system", 00:13:53.065 "dma_device_type": 1 00:13:53.065 }, 00:13:53.065 { 00:13:53.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.065 "dma_device_type": 2 00:13:53.065 } 00:13:53.065 ], 00:13:53.065 "driver_specific": {} 00:13:53.065 } 00:13:53.065 ] 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:53.065 BaseBdev3 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:53.065 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:53.324 18:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:53.324 [ 00:13:53.324 { 00:13:53.324 "name": "BaseBdev3", 00:13:53.324 "aliases": [ 00:13:53.324 "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2" 00:13:53.324 ], 00:13:53.324 "product_name": "Malloc disk", 00:13:53.324 "block_size": 512, 00:13:53.324 "num_blocks": 65536, 00:13:53.324 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:13:53.324 "assigned_rate_limits": { 00:13:53.324 "rw_ios_per_sec": 0, 00:13:53.324 "rw_mbytes_per_sec": 0, 00:13:53.324 "r_mbytes_per_sec": 0, 00:13:53.324 "w_mbytes_per_sec": 0 00:13:53.324 }, 00:13:53.324 "claimed": false, 00:13:53.324 "zoned": false, 00:13:53.324 "supported_io_types": { 00:13:53.324 "read": true, 00:13:53.324 "write": true, 00:13:53.324 "unmap": true, 00:13:53.324 "flush": true, 00:13:53.324 "reset": true, 00:13:53.324 "nvme_admin": false, 00:13:53.324 "nvme_io": false, 00:13:53.324 "nvme_io_md": false, 00:13:53.324 "write_zeroes": true, 00:13:53.324 "zcopy": true, 00:13:53.324 "get_zone_info": false, 00:13:53.324 "zone_management": false, 00:13:53.324 "zone_append": false, 00:13:53.324 "compare": false, 00:13:53.324 "compare_and_write": false, 00:13:53.324 "abort": true, 00:13:53.324 "seek_hole": false, 00:13:53.324 "seek_data": false, 00:13:53.324 "copy": true, 00:13:53.324 "nvme_iov_md": false 00:13:53.324 }, 00:13:53.324 "memory_domains": [ 00:13:53.324 { 00:13:53.324 "dma_device_id": "system", 00:13:53.324 "dma_device_type": 1 00:13:53.324 }, 00:13:53.324 { 00:13:53.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.324 "dma_device_type": 2 00:13:53.324 } 00:13:53.324 ], 00:13:53.324 "driver_specific": {} 00:13:53.324 } 00:13:53.324 ] 00:13:53.583 18:17:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:53.583 18:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:53.583 18:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:53.583 18:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:53.583 [2024-07-24 18:17:02.069540] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:53.583 [2024-07-24 18:17:02.069571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:53.583 [2024-07-24 18:17:02.069583] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.583 [2024-07-24 18:17:02.070471] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.583 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.841 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:53.841 "name": "Existed_Raid", 00:13:53.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.841 "strip_size_kb": 0, 00:13:53.841 "state": "configuring", 00:13:53.841 "raid_level": "raid1", 00:13:53.841 "superblock": false, 00:13:53.841 "num_base_bdevs": 3, 00:13:53.841 "num_base_bdevs_discovered": 2, 00:13:53.841 "num_base_bdevs_operational": 3, 00:13:53.841 "base_bdevs_list": [ 00:13:53.841 { 00:13:53.841 "name": "BaseBdev1", 00:13:53.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.841 "is_configured": false, 00:13:53.841 "data_offset": 0, 00:13:53.841 "data_size": 0 00:13:53.841 }, 00:13:53.841 { 00:13:53.841 "name": "BaseBdev2", 00:13:53.841 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:13:53.841 "is_configured": true, 00:13:53.841 "data_offset": 0, 00:13:53.841 "data_size": 65536 00:13:53.841 }, 00:13:53.841 { 00:13:53.841 "name": "BaseBdev3", 00:13:53.841 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:13:53.841 "is_configured": true, 00:13:53.841 "data_offset": 0, 00:13:53.841 "data_size": 65536 00:13:53.841 } 00:13:53.841 ] 00:13:53.841 }' 00:13:53.841 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:53.841 18:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.405 18:17:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:54.405 [2024-07-24 18:17:02.891648] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.405 18:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.665 18:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.665 "name": "Existed_Raid", 00:13:54.665 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:54.665 "strip_size_kb": 0, 00:13:54.665 "state": "configuring", 00:13:54.665 "raid_level": "raid1", 00:13:54.665 "superblock": false, 00:13:54.665 "num_base_bdevs": 3, 00:13:54.665 "num_base_bdevs_discovered": 1, 00:13:54.665 "num_base_bdevs_operational": 3, 00:13:54.665 "base_bdevs_list": [ 00:13:54.665 { 00:13:54.665 "name": "BaseBdev1", 00:13:54.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.665 "is_configured": false, 00:13:54.665 "data_offset": 0, 00:13:54.665 "data_size": 0 00:13:54.665 }, 00:13:54.665 { 00:13:54.665 "name": null, 00:13:54.665 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:13:54.665 "is_configured": false, 00:13:54.665 "data_offset": 0, 00:13:54.665 "data_size": 65536 00:13:54.665 }, 00:13:54.665 { 00:13:54.665 "name": "BaseBdev3", 00:13:54.665 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:13:54.665 "is_configured": true, 00:13:54.665 "data_offset": 0, 00:13:54.665 "data_size": 65536 00:13:54.665 } 00:13:54.665 ] 00:13:54.665 }' 00:13:54.665 18:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.665 18:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.274 18:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.274 18:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:55.274 18:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:55.274 18:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:55.533 [2024-07-24 18:17:03.892953] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.533 
BaseBdev1 00:13:55.533 18:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:55.533 18:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:55.533 18:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.533 18:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:55.533 18:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.533 18:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.533 18:17:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:55.533 18:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.791 [ 00:13:55.791 { 00:13:55.791 "name": "BaseBdev1", 00:13:55.791 "aliases": [ 00:13:55.791 "f39cd146-85f1-4442-a24d-7cd4ef41d700" 00:13:55.791 ], 00:13:55.791 "product_name": "Malloc disk", 00:13:55.791 "block_size": 512, 00:13:55.791 "num_blocks": 65536, 00:13:55.791 "uuid": "f39cd146-85f1-4442-a24d-7cd4ef41d700", 00:13:55.791 "assigned_rate_limits": { 00:13:55.791 "rw_ios_per_sec": 0, 00:13:55.791 "rw_mbytes_per_sec": 0, 00:13:55.791 "r_mbytes_per_sec": 0, 00:13:55.791 "w_mbytes_per_sec": 0 00:13:55.791 }, 00:13:55.791 "claimed": true, 00:13:55.791 "claim_type": "exclusive_write", 00:13:55.791 "zoned": false, 00:13:55.791 "supported_io_types": { 00:13:55.791 "read": true, 00:13:55.791 "write": true, 00:13:55.791 "unmap": true, 00:13:55.793 "flush": true, 00:13:55.793 "reset": true, 00:13:55.793 "nvme_admin": false, 00:13:55.793 "nvme_io": false, 00:13:55.793 
"nvme_io_md": false, 00:13:55.793 "write_zeroes": true, 00:13:55.793 "zcopy": true, 00:13:55.793 "get_zone_info": false, 00:13:55.793 "zone_management": false, 00:13:55.793 "zone_append": false, 00:13:55.793 "compare": false, 00:13:55.793 "compare_and_write": false, 00:13:55.793 "abort": true, 00:13:55.793 "seek_hole": false, 00:13:55.793 "seek_data": false, 00:13:55.793 "copy": true, 00:13:55.793 "nvme_iov_md": false 00:13:55.793 }, 00:13:55.793 "memory_domains": [ 00:13:55.793 { 00:13:55.793 "dma_device_id": "system", 00:13:55.793 "dma_device_type": 1 00:13:55.793 }, 00:13:55.793 { 00:13:55.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.793 "dma_device_type": 2 00:13:55.793 } 00:13:55.793 ], 00:13:55.793 "driver_specific": {} 00:13:55.793 } 00:13:55.793 ] 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.793 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.052 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.052 "name": "Existed_Raid", 00:13:56.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.052 "strip_size_kb": 0, 00:13:56.052 "state": "configuring", 00:13:56.052 "raid_level": "raid1", 00:13:56.052 "superblock": false, 00:13:56.052 "num_base_bdevs": 3, 00:13:56.052 "num_base_bdevs_discovered": 2, 00:13:56.052 "num_base_bdevs_operational": 3, 00:13:56.052 "base_bdevs_list": [ 00:13:56.052 { 00:13:56.052 "name": "BaseBdev1", 00:13:56.052 "uuid": "f39cd146-85f1-4442-a24d-7cd4ef41d700", 00:13:56.052 "is_configured": true, 00:13:56.052 "data_offset": 0, 00:13:56.052 "data_size": 65536 00:13:56.052 }, 00:13:56.052 { 00:13:56.052 "name": null, 00:13:56.052 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:13:56.052 "is_configured": false, 00:13:56.052 "data_offset": 0, 00:13:56.052 "data_size": 65536 00:13:56.052 }, 00:13:56.052 { 00:13:56.052 "name": "BaseBdev3", 00:13:56.052 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:13:56.052 "is_configured": true, 00:13:56.052 "data_offset": 0, 00:13:56.052 "data_size": 65536 00:13:56.052 } 00:13:56.052 ] 00:13:56.052 }' 00:13:56.052 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.052 18:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.619 18:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.619 18:17:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:56.619 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:56.619 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:56.878 [2024-07-24 18:17:05.236433] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.878 "name": "Existed_Raid", 00:13:56.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.878 "strip_size_kb": 0, 00:13:56.878 "state": "configuring", 00:13:56.878 "raid_level": "raid1", 00:13:56.878 "superblock": false, 00:13:56.878 "num_base_bdevs": 3, 00:13:56.878 "num_base_bdevs_discovered": 1, 00:13:56.878 "num_base_bdevs_operational": 3, 00:13:56.878 "base_bdevs_list": [ 00:13:56.878 { 00:13:56.878 "name": "BaseBdev1", 00:13:56.878 "uuid": "f39cd146-85f1-4442-a24d-7cd4ef41d700", 00:13:56.878 "is_configured": true, 00:13:56.878 "data_offset": 0, 00:13:56.878 "data_size": 65536 00:13:56.878 }, 00:13:56.878 { 00:13:56.878 "name": null, 00:13:56.878 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:13:56.878 "is_configured": false, 00:13:56.878 "data_offset": 0, 00:13:56.878 "data_size": 65536 00:13:56.878 }, 00:13:56.878 { 00:13:56.878 "name": null, 00:13:56.878 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:13:56.878 "is_configured": false, 00:13:56.878 "data_offset": 0, 00:13:56.878 "data_size": 65536 00:13:56.878 } 00:13:56.878 ] 00:13:56.878 }' 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.878 18:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.445 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.445 18:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.703 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:57.704 [2024-07-24 18:17:06.239027] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.704 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.962 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.962 "name": "Existed_Raid", 00:13:57.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.962 "strip_size_kb": 0, 
00:13:57.962 "state": "configuring", 00:13:57.962 "raid_level": "raid1", 00:13:57.962 "superblock": false, 00:13:57.962 "num_base_bdevs": 3, 00:13:57.962 "num_base_bdevs_discovered": 2, 00:13:57.962 "num_base_bdevs_operational": 3, 00:13:57.962 "base_bdevs_list": [ 00:13:57.962 { 00:13:57.962 "name": "BaseBdev1", 00:13:57.962 "uuid": "f39cd146-85f1-4442-a24d-7cd4ef41d700", 00:13:57.962 "is_configured": true, 00:13:57.962 "data_offset": 0, 00:13:57.962 "data_size": 65536 00:13:57.962 }, 00:13:57.962 { 00:13:57.962 "name": null, 00:13:57.962 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:13:57.962 "is_configured": false, 00:13:57.962 "data_offset": 0, 00:13:57.962 "data_size": 65536 00:13:57.962 }, 00:13:57.962 { 00:13:57.962 "name": "BaseBdev3", 00:13:57.962 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:13:57.962 "is_configured": true, 00:13:57.962 "data_offset": 0, 00:13:57.962 "data_size": 65536 00:13:57.962 } 00:13:57.962 ] 00:13:57.962 }' 00:13:57.962 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.962 18:17:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.528 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.528 18:17:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:58.528 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:58.528 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:58.786 [2024-07-24 18:17:07.237622] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.786 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.044 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.044 "name": "Existed_Raid", 00:13:59.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.044 "strip_size_kb": 0, 00:13:59.044 "state": "configuring", 00:13:59.044 "raid_level": "raid1", 00:13:59.044 "superblock": false, 00:13:59.044 "num_base_bdevs": 3, 00:13:59.044 "num_base_bdevs_discovered": 1, 00:13:59.044 "num_base_bdevs_operational": 3, 00:13:59.044 "base_bdevs_list": [ 00:13:59.044 { 00:13:59.044 "name": null, 00:13:59.044 "uuid": 
"f39cd146-85f1-4442-a24d-7cd4ef41d700", 00:13:59.044 "is_configured": false, 00:13:59.044 "data_offset": 0, 00:13:59.044 "data_size": 65536 00:13:59.044 }, 00:13:59.044 { 00:13:59.044 "name": null, 00:13:59.044 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:13:59.044 "is_configured": false, 00:13:59.044 "data_offset": 0, 00:13:59.044 "data_size": 65536 00:13:59.044 }, 00:13:59.044 { 00:13:59.044 "name": "BaseBdev3", 00:13:59.044 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:13:59.044 "is_configured": true, 00:13:59.044 "data_offset": 0, 00:13:59.044 "data_size": 65536 00:13:59.044 } 00:13:59.044 ] 00:13:59.044 }' 00:13:59.044 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.044 18:17:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.611 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.611 18:17:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:59.611 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:59.611 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:59.870 [2024-07-24 18:17:08.245736] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.870 "name": "Existed_Raid", 00:13:59.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.870 "strip_size_kb": 0, 00:13:59.870 "state": "configuring", 00:13:59.870 "raid_level": "raid1", 00:13:59.870 "superblock": false, 00:13:59.870 "num_base_bdevs": 3, 00:13:59.870 "num_base_bdevs_discovered": 2, 00:13:59.870 "num_base_bdevs_operational": 3, 00:13:59.870 "base_bdevs_list": [ 00:13:59.870 { 00:13:59.870 "name": null, 00:13:59.870 "uuid": "f39cd146-85f1-4442-a24d-7cd4ef41d700", 00:13:59.870 "is_configured": false, 00:13:59.870 "data_offset": 0, 00:13:59.870 "data_size": 65536 00:13:59.870 }, 00:13:59.870 { 00:13:59.870 "name": "BaseBdev2", 00:13:59.870 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:13:59.870 "is_configured": true, 
00:13:59.870 "data_offset": 0, 00:13:59.870 "data_size": 65536 00:13:59.870 }, 00:13:59.870 { 00:13:59.870 "name": "BaseBdev3", 00:13:59.870 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:13:59.870 "is_configured": true, 00:13:59.870 "data_offset": 0, 00:13:59.870 "data_size": 65536 00:13:59.870 } 00:13:59.870 ] 00:13:59.870 }' 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.870 18:17:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.434 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:00.434 18:17:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.692 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:00.692 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:00.692 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.692 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f39cd146-85f1-4442-a24d-7cd4ef41d700 00:14:00.951 [2024-07-24 18:17:09.435602] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:00.951 [2024-07-24 18:17:09.435638] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x175b120 00:14:00.951 [2024-07-24 18:17:09.435643] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:00.951 [2024-07-24 18:17:09.435771] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18ffc90 00:14:00.951 [2024-07-24 18:17:09.435857] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x175b120 00:14:00.951 [2024-07-24 18:17:09.435863] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x175b120 00:14:00.951 [2024-07-24 18:17:09.435993] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.951 NewBaseBdev 00:14:00.951 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:00.951 18:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:00.951 18:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:00.951 18:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:00.951 18:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:00.951 18:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:00.951 18:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:01.209 [ 00:14:01.209 { 00:14:01.209 "name": "NewBaseBdev", 00:14:01.209 "aliases": [ 00:14:01.209 "f39cd146-85f1-4442-a24d-7cd4ef41d700" 00:14:01.209 ], 00:14:01.209 "product_name": "Malloc disk", 00:14:01.209 "block_size": 512, 00:14:01.209 "num_blocks": 65536, 00:14:01.209 "uuid": "f39cd146-85f1-4442-a24d-7cd4ef41d700", 00:14:01.209 "assigned_rate_limits": { 00:14:01.209 "rw_ios_per_sec": 0, 
00:14:01.209 "rw_mbytes_per_sec": 0, 00:14:01.209 "r_mbytes_per_sec": 0, 00:14:01.209 "w_mbytes_per_sec": 0 00:14:01.209 }, 00:14:01.209 "claimed": true, 00:14:01.209 "claim_type": "exclusive_write", 00:14:01.209 "zoned": false, 00:14:01.209 "supported_io_types": { 00:14:01.209 "read": true, 00:14:01.209 "write": true, 00:14:01.209 "unmap": true, 00:14:01.209 "flush": true, 00:14:01.209 "reset": true, 00:14:01.209 "nvme_admin": false, 00:14:01.209 "nvme_io": false, 00:14:01.209 "nvme_io_md": false, 00:14:01.209 "write_zeroes": true, 00:14:01.209 "zcopy": true, 00:14:01.209 "get_zone_info": false, 00:14:01.209 "zone_management": false, 00:14:01.209 "zone_append": false, 00:14:01.209 "compare": false, 00:14:01.209 "compare_and_write": false, 00:14:01.209 "abort": true, 00:14:01.209 "seek_hole": false, 00:14:01.209 "seek_data": false, 00:14:01.209 "copy": true, 00:14:01.209 "nvme_iov_md": false 00:14:01.209 }, 00:14:01.209 "memory_domains": [ 00:14:01.209 { 00:14:01.209 "dma_device_id": "system", 00:14:01.209 "dma_device_type": 1 00:14:01.209 }, 00:14:01.209 { 00:14:01.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.209 "dma_device_type": 2 00:14:01.209 } 00:14:01.209 ], 00:14:01.209 "driver_specific": {} 00:14:01.209 } 00:14:01.209 ] 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:01.209 18:17:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.209 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.467 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.467 "name": "Existed_Raid", 00:14:01.467 "uuid": "7241d3ef-1b06-4516-9263-ed482cf863dd", 00:14:01.467 "strip_size_kb": 0, 00:14:01.467 "state": "online", 00:14:01.467 "raid_level": "raid1", 00:14:01.467 "superblock": false, 00:14:01.467 "num_base_bdevs": 3, 00:14:01.467 "num_base_bdevs_discovered": 3, 00:14:01.467 "num_base_bdevs_operational": 3, 00:14:01.467 "base_bdevs_list": [ 00:14:01.467 { 00:14:01.467 "name": "NewBaseBdev", 00:14:01.467 "uuid": "f39cd146-85f1-4442-a24d-7cd4ef41d700", 00:14:01.467 "is_configured": true, 00:14:01.467 "data_offset": 0, 00:14:01.467 "data_size": 65536 00:14:01.467 }, 00:14:01.467 { 00:14:01.467 "name": "BaseBdev2", 00:14:01.467 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:14:01.467 "is_configured": true, 00:14:01.467 "data_offset": 0, 00:14:01.467 "data_size": 65536 00:14:01.467 }, 00:14:01.467 { 00:14:01.467 "name": "BaseBdev3", 00:14:01.467 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:14:01.467 "is_configured": true, 00:14:01.467 "data_offset": 0, 
00:14:01.467 "data_size": 65536 00:14:01.467 } 00:14:01.467 ] 00:14:01.467 }' 00:14:01.467 18:17:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.467 18:17:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.034 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.034 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:02.034 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:02.034 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:02.034 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:02.034 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:02.034 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:02.034 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:02.034 [2024-07-24 18:17:10.582772] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.034 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:02.034 "name": "Existed_Raid", 00:14:02.034 "aliases": [ 00:14:02.034 "7241d3ef-1b06-4516-9263-ed482cf863dd" 00:14:02.034 ], 00:14:02.034 "product_name": "Raid Volume", 00:14:02.034 "block_size": 512, 00:14:02.034 "num_blocks": 65536, 00:14:02.034 "uuid": "7241d3ef-1b06-4516-9263-ed482cf863dd", 00:14:02.034 "assigned_rate_limits": { 00:14:02.034 "rw_ios_per_sec": 0, 00:14:02.034 "rw_mbytes_per_sec": 0, 00:14:02.034 "r_mbytes_per_sec": 0, 00:14:02.034 "w_mbytes_per_sec": 0 00:14:02.034 }, 00:14:02.034 
"claimed": false, 00:14:02.034 "zoned": false, 00:14:02.034 "supported_io_types": { 00:14:02.034 "read": true, 00:14:02.034 "write": true, 00:14:02.034 "unmap": false, 00:14:02.034 "flush": false, 00:14:02.034 "reset": true, 00:14:02.034 "nvme_admin": false, 00:14:02.034 "nvme_io": false, 00:14:02.034 "nvme_io_md": false, 00:14:02.034 "write_zeroes": true, 00:14:02.034 "zcopy": false, 00:14:02.034 "get_zone_info": false, 00:14:02.034 "zone_management": false, 00:14:02.034 "zone_append": false, 00:14:02.034 "compare": false, 00:14:02.034 "compare_and_write": false, 00:14:02.034 "abort": false, 00:14:02.034 "seek_hole": false, 00:14:02.034 "seek_data": false, 00:14:02.034 "copy": false, 00:14:02.034 "nvme_iov_md": false 00:14:02.034 }, 00:14:02.034 "memory_domains": [ 00:14:02.034 { 00:14:02.034 "dma_device_id": "system", 00:14:02.034 "dma_device_type": 1 00:14:02.034 }, 00:14:02.034 { 00:14:02.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.034 "dma_device_type": 2 00:14:02.034 }, 00:14:02.034 { 00:14:02.034 "dma_device_id": "system", 00:14:02.034 "dma_device_type": 1 00:14:02.034 }, 00:14:02.034 { 00:14:02.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.034 "dma_device_type": 2 00:14:02.034 }, 00:14:02.034 { 00:14:02.034 "dma_device_id": "system", 00:14:02.034 "dma_device_type": 1 00:14:02.034 }, 00:14:02.034 { 00:14:02.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.034 "dma_device_type": 2 00:14:02.034 } 00:14:02.034 ], 00:14:02.034 "driver_specific": { 00:14:02.034 "raid": { 00:14:02.034 "uuid": "7241d3ef-1b06-4516-9263-ed482cf863dd", 00:14:02.034 "strip_size_kb": 0, 00:14:02.034 "state": "online", 00:14:02.034 "raid_level": "raid1", 00:14:02.034 "superblock": false, 00:14:02.034 "num_base_bdevs": 3, 00:14:02.034 "num_base_bdevs_discovered": 3, 00:14:02.034 "num_base_bdevs_operational": 3, 00:14:02.034 "base_bdevs_list": [ 00:14:02.034 { 00:14:02.034 "name": "NewBaseBdev", 00:14:02.034 "uuid": "f39cd146-85f1-4442-a24d-7cd4ef41d700", 
00:14:02.034 "is_configured": true, 00:14:02.034 "data_offset": 0, 00:14:02.034 "data_size": 65536 00:14:02.034 }, 00:14:02.034 { 00:14:02.034 "name": "BaseBdev2", 00:14:02.034 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:14:02.034 "is_configured": true, 00:14:02.034 "data_offset": 0, 00:14:02.034 "data_size": 65536 00:14:02.034 }, 00:14:02.035 { 00:14:02.035 "name": "BaseBdev3", 00:14:02.035 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:14:02.035 "is_configured": true, 00:14:02.035 "data_offset": 0, 00:14:02.035 "data_size": 65536 00:14:02.035 } 00:14:02.035 ] 00:14:02.035 } 00:14:02.035 } 00:14:02.035 }' 00:14:02.035 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.293 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:02.293 BaseBdev2 00:14:02.293 BaseBdev3' 00:14:02.293 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.293 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:02.293 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:02.293 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:02.293 "name": "NewBaseBdev", 00:14:02.293 "aliases": [ 00:14:02.293 "f39cd146-85f1-4442-a24d-7cd4ef41d700" 00:14:02.293 ], 00:14:02.293 "product_name": "Malloc disk", 00:14:02.293 "block_size": 512, 00:14:02.293 "num_blocks": 65536, 00:14:02.293 "uuid": "f39cd146-85f1-4442-a24d-7cd4ef41d700", 00:14:02.293 "assigned_rate_limits": { 00:14:02.293 "rw_ios_per_sec": 0, 00:14:02.293 "rw_mbytes_per_sec": 0, 00:14:02.293 "r_mbytes_per_sec": 0, 00:14:02.293 "w_mbytes_per_sec": 0 00:14:02.293 }, 00:14:02.293 
"claimed": true, 00:14:02.293 "claim_type": "exclusive_write", 00:14:02.293 "zoned": false, 00:14:02.293 "supported_io_types": { 00:14:02.293 "read": true, 00:14:02.293 "write": true, 00:14:02.293 "unmap": true, 00:14:02.293 "flush": true, 00:14:02.293 "reset": true, 00:14:02.293 "nvme_admin": false, 00:14:02.293 "nvme_io": false, 00:14:02.293 "nvme_io_md": false, 00:14:02.293 "write_zeroes": true, 00:14:02.293 "zcopy": true, 00:14:02.293 "get_zone_info": false, 00:14:02.293 "zone_management": false, 00:14:02.293 "zone_append": false, 00:14:02.293 "compare": false, 00:14:02.293 "compare_and_write": false, 00:14:02.293 "abort": true, 00:14:02.293 "seek_hole": false, 00:14:02.293 "seek_data": false, 00:14:02.293 "copy": true, 00:14:02.293 "nvme_iov_md": false 00:14:02.293 }, 00:14:02.293 "memory_domains": [ 00:14:02.293 { 00:14:02.293 "dma_device_id": "system", 00:14:02.293 "dma_device_type": 1 00:14:02.293 }, 00:14:02.293 { 00:14:02.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.293 "dma_device_type": 2 00:14:02.293 } 00:14:02.293 ], 00:14:02.293 "driver_specific": {} 00:14:02.293 }' 00:14:02.293 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.293 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.552 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:02.552 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.552 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.552 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:02.552 18:17:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.552 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.552 18:17:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:02.552 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.552 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.552 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:02.552 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.552 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:02.552 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:02.812 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:02.812 "name": "BaseBdev2", 00:14:02.812 "aliases": [ 00:14:02.812 "439728c6-216d-485f-9dc1-3b9de9bef00d" 00:14:02.812 ], 00:14:02.812 "product_name": "Malloc disk", 00:14:02.812 "block_size": 512, 00:14:02.812 "num_blocks": 65536, 00:14:02.812 "uuid": "439728c6-216d-485f-9dc1-3b9de9bef00d", 00:14:02.812 "assigned_rate_limits": { 00:14:02.812 "rw_ios_per_sec": 0, 00:14:02.812 "rw_mbytes_per_sec": 0, 00:14:02.812 "r_mbytes_per_sec": 0, 00:14:02.812 "w_mbytes_per_sec": 0 00:14:02.812 }, 00:14:02.812 "claimed": true, 00:14:02.812 "claim_type": "exclusive_write", 00:14:02.812 "zoned": false, 00:14:02.812 "supported_io_types": { 00:14:02.812 "read": true, 00:14:02.812 "write": true, 00:14:02.812 "unmap": true, 00:14:02.812 "flush": true, 00:14:02.812 "reset": true, 00:14:02.812 "nvme_admin": false, 00:14:02.812 "nvme_io": false, 00:14:02.812 "nvme_io_md": false, 00:14:02.812 "write_zeroes": true, 00:14:02.812 "zcopy": true, 00:14:02.812 "get_zone_info": false, 00:14:02.812 "zone_management": false, 00:14:02.812 "zone_append": false, 00:14:02.812 "compare": false, 00:14:02.812 "compare_and_write": false, 
00:14:02.812 "abort": true, 00:14:02.812 "seek_hole": false, 00:14:02.812 "seek_data": false, 00:14:02.812 "copy": true, 00:14:02.812 "nvme_iov_md": false 00:14:02.812 }, 00:14:02.812 "memory_domains": [ 00:14:02.812 { 00:14:02.812 "dma_device_id": "system", 00:14:02.812 "dma_device_type": 1 00:14:02.812 }, 00:14:02.812 { 00:14:02.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.812 "dma_device_type": 2 00:14:02.812 } 00:14:02.812 ], 00:14:02.812 "driver_specific": {} 00:14:02.812 }' 00:14:02.812 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.812 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.812 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:02.812 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.812 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.071 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.071 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.071 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.071 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.071 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.071 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.071 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.071 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:03.071 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:03.071 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.331 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.331 "name": "BaseBdev3", 00:14:03.331 "aliases": [ 00:14:03.331 "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2" 00:14:03.331 ], 00:14:03.331 "product_name": "Malloc disk", 00:14:03.331 "block_size": 512, 00:14:03.331 "num_blocks": 65536, 00:14:03.331 "uuid": "99f0c36e-1d95-44fc-92cf-f2d65b19e2d2", 00:14:03.331 "assigned_rate_limits": { 00:14:03.331 "rw_ios_per_sec": 0, 00:14:03.331 "rw_mbytes_per_sec": 0, 00:14:03.331 "r_mbytes_per_sec": 0, 00:14:03.331 "w_mbytes_per_sec": 0 00:14:03.331 }, 00:14:03.331 "claimed": true, 00:14:03.331 "claim_type": "exclusive_write", 00:14:03.331 "zoned": false, 00:14:03.331 "supported_io_types": { 00:14:03.331 "read": true, 00:14:03.331 "write": true, 00:14:03.331 "unmap": true, 00:14:03.331 "flush": true, 00:14:03.331 "reset": true, 00:14:03.331 "nvme_admin": false, 00:14:03.331 "nvme_io": false, 00:14:03.331 "nvme_io_md": false, 00:14:03.331 "write_zeroes": true, 00:14:03.331 "zcopy": true, 00:14:03.331 "get_zone_info": false, 00:14:03.331 "zone_management": false, 00:14:03.331 "zone_append": false, 00:14:03.331 "compare": false, 00:14:03.331 "compare_and_write": false, 00:14:03.331 "abort": true, 00:14:03.331 "seek_hole": false, 00:14:03.331 "seek_data": false, 00:14:03.331 "copy": true, 00:14:03.331 "nvme_iov_md": false 00:14:03.331 }, 00:14:03.331 "memory_domains": [ 00:14:03.331 { 00:14:03.331 "dma_device_id": "system", 00:14:03.331 "dma_device_type": 1 00:14:03.331 }, 00:14:03.331 { 00:14:03.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.331 "dma_device_type": 2 00:14:03.331 } 00:14:03.331 ], 00:14:03.331 "driver_specific": {} 00:14:03.331 }' 00:14:03.331 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.331 18:17:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.331 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.331 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.331 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.590 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.590 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.590 18:17:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.590 18:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.590 18:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.590 18:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.590 18:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.590 18:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:03.849 [2024-07-24 18:17:12.230831] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.849 [2024-07-24 18:17:12.230850] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.849 [2024-07-24 18:17:12.230888] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.849 [2024-07-24 18:17:12.231066] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.849 [2024-07-24 18:17:12.231073] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x175b120 name Existed_Raid, state offline 
00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2197688 00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2197688 ']' 00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2197688 00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2197688 00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2197688' 00:14:03.849 killing process with pid 2197688 00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2197688 00:14:03.849 [2024-07-24 18:17:12.303791] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.849 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2197688 00:14:03.849 [2024-07-24 18:17:12.326015] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.108 18:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:04.109 00:14:04.109 real 0m21.472s 00:14:04.109 user 0m39.226s 00:14:04.109 sys 0m4.181s 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.109 ************************************ 00:14:04.109 END TEST 
raid_state_function_test 00:14:04.109 ************************************ 00:14:04.109 18:17:12 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:14:04.109 18:17:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:04.109 18:17:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.109 18:17:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.109 ************************************ 00:14:04.109 START TEST raid_state_function_test_sb 00:14:04.109 ************************************ 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2201989 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2201989' 00:14:04.109 Process raid 
pid: 2201989 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2201989 /var/tmp/spdk-raid.sock 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2201989 ']' 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:04.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.109 18:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.109 [2024-07-24 18:17:12.628243] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:14:04.109 [2024-07-24 18:17:12.628291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:01.0 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:01.1 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:01.2 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:01.3 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:01.4 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:01.5 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:01.6 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:01.7 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:02.0 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:02.1 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:02.2 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:02.3 cannot be used 00:14:04.109 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:02.4 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:02.5 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:02.6 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b3:02.7 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:01.0 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:01.1 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:01.2 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:01.3 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:01.4 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:01.5 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:01.6 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:01.7 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:02.0 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:02.1 cannot be used 00:14:04.109 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:02.2 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:02.3 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:02.4 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:02.5 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:02.6 cannot be used 00:14:04.109 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.109 EAL: Requested device 0000:b5:02.7 cannot be used 00:14:04.369 [2024-07-24 18:17:12.718435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.369 [2024-07-24 18:17:12.788983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.369 [2024-07-24 18:17:12.845904] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.369 [2024-07-24 18:17:12.845930] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.939 18:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.939 18:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:04.939 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:05.198 [2024-07-24 18:17:13.569177] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:05.198 [2024-07-24 18:17:13.569204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:14:05.198 [2024-07-24 18:17:13.569211] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:05.198 [2024-07-24 18:17:13.569218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:05.198 [2024-07-24 18:17:13.569226] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:05.199 [2024-07-24 18:17:13.569233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:05.199 "name": "Existed_Raid", 00:14:05.199 "uuid": "e8699235-43db-415a-abca-29644f89b589", 00:14:05.199 "strip_size_kb": 0, 00:14:05.199 "state": "configuring", 00:14:05.199 "raid_level": "raid1", 00:14:05.199 "superblock": true, 00:14:05.199 "num_base_bdevs": 3, 00:14:05.199 "num_base_bdevs_discovered": 0, 00:14:05.199 "num_base_bdevs_operational": 3, 00:14:05.199 "base_bdevs_list": [ 00:14:05.199 { 00:14:05.199 "name": "BaseBdev1", 00:14:05.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.199 "is_configured": false, 00:14:05.199 "data_offset": 0, 00:14:05.199 "data_size": 0 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "name": "BaseBdev2", 00:14:05.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.199 "is_configured": false, 00:14:05.199 "data_offset": 0, 00:14:05.199 "data_size": 0 00:14:05.199 }, 00:14:05.199 { 00:14:05.199 "name": "BaseBdev3", 00:14:05.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.199 "is_configured": false, 00:14:05.199 "data_offset": 0, 00:14:05.199 "data_size": 0 00:14:05.199 } 00:14:05.199 ] 00:14:05.199 }' 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:05.199 18:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.768 18:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:06.027 [2024-07-24 18:17:14.379169] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:06.027 [2024-07-24 18:17:14.379190] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xff11c0 name Existed_Raid, state configuring 00:14:06.027 18:17:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:06.027 [2024-07-24 18:17:14.547620] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:06.027 [2024-07-24 18:17:14.547644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:06.027 [2024-07-24 18:17:14.547651] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:06.027 [2024-07-24 18:17:14.547658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:06.027 [2024-07-24 18:17:14.547664] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:06.027 [2024-07-24 18:17:14.547671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:06.027 18:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:06.287 [2024-07-24 18:17:14.708518] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.287 BaseBdev1 00:14:06.287 18:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:06.287 18:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:06.287 18:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:06.287 18:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:06.287 18:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:06.287 18:17:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:06.287 18:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:06.287 18:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:06.546 [ 00:14:06.546 { 00:14:06.546 "name": "BaseBdev1", 00:14:06.546 "aliases": [ 00:14:06.546 "5091fc75-5afb-46cd-b353-c5ce7b8fb2f8" 00:14:06.546 ], 00:14:06.547 "product_name": "Malloc disk", 00:14:06.547 "block_size": 512, 00:14:06.547 "num_blocks": 65536, 00:14:06.547 "uuid": "5091fc75-5afb-46cd-b353-c5ce7b8fb2f8", 00:14:06.547 "assigned_rate_limits": { 00:14:06.547 "rw_ios_per_sec": 0, 00:14:06.547 "rw_mbytes_per_sec": 0, 00:14:06.547 "r_mbytes_per_sec": 0, 00:14:06.547 "w_mbytes_per_sec": 0 00:14:06.547 }, 00:14:06.547 "claimed": true, 00:14:06.547 "claim_type": "exclusive_write", 00:14:06.547 "zoned": false, 00:14:06.547 "supported_io_types": { 00:14:06.547 "read": true, 00:14:06.547 "write": true, 00:14:06.547 "unmap": true, 00:14:06.547 "flush": true, 00:14:06.547 "reset": true, 00:14:06.547 "nvme_admin": false, 00:14:06.547 "nvme_io": false, 00:14:06.547 "nvme_io_md": false, 00:14:06.547 "write_zeroes": true, 00:14:06.547 "zcopy": true, 00:14:06.547 "get_zone_info": false, 00:14:06.547 "zone_management": false, 00:14:06.547 "zone_append": false, 00:14:06.547 "compare": false, 00:14:06.547 "compare_and_write": false, 00:14:06.547 "abort": true, 00:14:06.547 "seek_hole": false, 00:14:06.547 "seek_data": false, 00:14:06.547 "copy": true, 00:14:06.547 "nvme_iov_md": false 00:14:06.547 }, 00:14:06.547 "memory_domains": [ 00:14:06.547 { 00:14:06.547 "dma_device_id": "system", 00:14:06.547 "dma_device_type": 1 00:14:06.547 }, 
00:14:06.547 { 00:14:06.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.547 "dma_device_type": 2 00:14:06.547 } 00:14:06.547 ], 00:14:06.547 "driver_specific": {} 00:14:06.547 } 00:14:06.547 ] 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.547 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.807 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.807 "name": "Existed_Raid", 00:14:06.807 "uuid": 
"9e43841f-b961-4c0b-a76c-fadd74b6580d", 00:14:06.807 "strip_size_kb": 0, 00:14:06.807 "state": "configuring", 00:14:06.807 "raid_level": "raid1", 00:14:06.807 "superblock": true, 00:14:06.807 "num_base_bdevs": 3, 00:14:06.807 "num_base_bdevs_discovered": 1, 00:14:06.807 "num_base_bdevs_operational": 3, 00:14:06.807 "base_bdevs_list": [ 00:14:06.807 { 00:14:06.807 "name": "BaseBdev1", 00:14:06.807 "uuid": "5091fc75-5afb-46cd-b353-c5ce7b8fb2f8", 00:14:06.807 "is_configured": true, 00:14:06.807 "data_offset": 2048, 00:14:06.807 "data_size": 63488 00:14:06.807 }, 00:14:06.807 { 00:14:06.807 "name": "BaseBdev2", 00:14:06.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.807 "is_configured": false, 00:14:06.807 "data_offset": 0, 00:14:06.807 "data_size": 0 00:14:06.807 }, 00:14:06.807 { 00:14:06.807 "name": "BaseBdev3", 00:14:06.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.807 "is_configured": false, 00:14:06.807 "data_offset": 0, 00:14:06.807 "data_size": 0 00:14:06.807 } 00:14:06.807 ] 00:14:06.807 }' 00:14:06.807 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.807 18:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.375 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:07.375 [2024-07-24 18:17:15.863497] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:07.375 [2024-07-24 18:17:15.863526] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xff0a90 name Existed_Raid, state configuring 00:14:07.375 18:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
00:14:07.634 [2024-07-24 18:17:16.035968] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.634 [2024-07-24 18:17:16.036973] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.634 [2024-07-24 18:17:16.036998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.634 [2024-07-24 18:17:16.037005] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:07.634 [2024-07-24 18:17:16.037012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:07.634 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:07.634 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:07.634 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:07.634 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.634 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:07.634 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:07.634 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:07.634 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:07.634 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.634 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.635 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.635 18:17:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.635 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.635 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.635 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:07.635 "name": "Existed_Raid", 00:14:07.635 "uuid": "48c023b9-885d-4b0d-a342-098c53f26d31", 00:14:07.635 "strip_size_kb": 0, 00:14:07.635 "state": "configuring", 00:14:07.635 "raid_level": "raid1", 00:14:07.635 "superblock": true, 00:14:07.635 "num_base_bdevs": 3, 00:14:07.635 "num_base_bdevs_discovered": 1, 00:14:07.635 "num_base_bdevs_operational": 3, 00:14:07.635 "base_bdevs_list": [ 00:14:07.635 { 00:14:07.635 "name": "BaseBdev1", 00:14:07.635 "uuid": "5091fc75-5afb-46cd-b353-c5ce7b8fb2f8", 00:14:07.635 "is_configured": true, 00:14:07.635 "data_offset": 2048, 00:14:07.635 "data_size": 63488 00:14:07.635 }, 00:14:07.635 { 00:14:07.635 "name": "BaseBdev2", 00:14:07.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.635 "is_configured": false, 00:14:07.635 "data_offset": 0, 00:14:07.635 "data_size": 0 00:14:07.635 }, 00:14:07.635 { 00:14:07.635 "name": "BaseBdev3", 00:14:07.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.635 "is_configured": false, 00:14:07.635 "data_offset": 0, 00:14:07.635 "data_size": 0 00:14:07.635 } 00:14:07.635 ] 00:14:07.635 }' 00:14:07.635 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:07.635 18:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.203 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:08.462 [2024-07-24 18:17:16.892957] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.462 BaseBdev2 00:14:08.462 18:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:08.462 18:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:08.462 18:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:08.462 18:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:08.462 18:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:08.462 18:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:08.462 18:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:08.722 18:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:08.722 [ 00:14:08.722 { 00:14:08.722 "name": "BaseBdev2", 00:14:08.722 "aliases": [ 00:14:08.722 "9df00f6d-847d-47c7-9e16-dce056b3ed45" 00:14:08.722 ], 00:14:08.722 "product_name": "Malloc disk", 00:14:08.722 "block_size": 512, 00:14:08.722 "num_blocks": 65536, 00:14:08.722 "uuid": "9df00f6d-847d-47c7-9e16-dce056b3ed45", 00:14:08.722 "assigned_rate_limits": { 00:14:08.722 "rw_ios_per_sec": 0, 00:14:08.722 "rw_mbytes_per_sec": 0, 00:14:08.722 "r_mbytes_per_sec": 0, 00:14:08.722 "w_mbytes_per_sec": 0 00:14:08.722 }, 00:14:08.722 "claimed": true, 00:14:08.722 "claim_type": "exclusive_write", 00:14:08.722 "zoned": false, 00:14:08.722 "supported_io_types": { 
00:14:08.722 "read": true, 00:14:08.722 "write": true, 00:14:08.722 "unmap": true, 00:14:08.722 "flush": true, 00:14:08.722 "reset": true, 00:14:08.722 "nvme_admin": false, 00:14:08.722 "nvme_io": false, 00:14:08.722 "nvme_io_md": false, 00:14:08.722 "write_zeroes": true, 00:14:08.722 "zcopy": true, 00:14:08.722 "get_zone_info": false, 00:14:08.722 "zone_management": false, 00:14:08.722 "zone_append": false, 00:14:08.722 "compare": false, 00:14:08.722 "compare_and_write": false, 00:14:08.722 "abort": true, 00:14:08.722 "seek_hole": false, 00:14:08.722 "seek_data": false, 00:14:08.722 "copy": true, 00:14:08.722 "nvme_iov_md": false 00:14:08.723 }, 00:14:08.723 "memory_domains": [ 00:14:08.723 { 00:14:08.723 "dma_device_id": "system", 00:14:08.723 "dma_device_type": 1 00:14:08.723 }, 00:14:08.723 { 00:14:08.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.723 "dma_device_type": 2 00:14:08.723 } 00:14:08.723 ], 00:14:08.723 "driver_specific": {} 00:14:08.723 } 00:14:08.723 ] 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.723 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.982 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.982 "name": "Existed_Raid", 00:14:08.982 "uuid": "48c023b9-885d-4b0d-a342-098c53f26d31", 00:14:08.982 "strip_size_kb": 0, 00:14:08.982 "state": "configuring", 00:14:08.982 "raid_level": "raid1", 00:14:08.982 "superblock": true, 00:14:08.982 "num_base_bdevs": 3, 00:14:08.982 "num_base_bdevs_discovered": 2, 00:14:08.982 "num_base_bdevs_operational": 3, 00:14:08.982 "base_bdevs_list": [ 00:14:08.982 { 00:14:08.982 "name": "BaseBdev1", 00:14:08.982 "uuid": "5091fc75-5afb-46cd-b353-c5ce7b8fb2f8", 00:14:08.982 "is_configured": true, 00:14:08.982 "data_offset": 2048, 00:14:08.982 "data_size": 63488 00:14:08.982 }, 00:14:08.982 { 00:14:08.982 "name": "BaseBdev2", 00:14:08.982 "uuid": "9df00f6d-847d-47c7-9e16-dce056b3ed45", 00:14:08.982 "is_configured": true, 00:14:08.982 "data_offset": 2048, 00:14:08.982 "data_size": 63488 00:14:08.982 }, 00:14:08.982 { 00:14:08.982 "name": "BaseBdev3", 00:14:08.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.982 "is_configured": false, 00:14:08.982 "data_offset": 0, 00:14:08.982 
"data_size": 0 00:14:08.982 } 00:14:08.982 ] 00:14:08.982 }' 00:14:08.982 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.982 18:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.552 18:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:09.552 [2024-07-24 18:17:18.054689] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.552 [2024-07-24 18:17:18.054800] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xff1980 00:14:09.552 [2024-07-24 18:17:18.054809] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:09.552 [2024-07-24 18:17:18.054918] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xff1650 00:14:09.552 [2024-07-24 18:17:18.055003] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xff1980 00:14:09.552 [2024-07-24 18:17:18.055009] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xff1980 00:14:09.552 [2024-07-24 18:17:18.055073] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.552 BaseBdev3 00:14:09.552 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:09.552 18:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:09.552 18:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:09.552 18:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:09.552 18:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:09.552 18:17:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:09.552 18:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:09.816 [ 00:14:09.816 { 00:14:09.816 "name": "BaseBdev3", 00:14:09.816 "aliases": [ 00:14:09.816 "09e1841b-bf57-436e-a860-3bc858bebb00" 00:14:09.816 ], 00:14:09.816 "product_name": "Malloc disk", 00:14:09.816 "block_size": 512, 00:14:09.816 "num_blocks": 65536, 00:14:09.816 "uuid": "09e1841b-bf57-436e-a860-3bc858bebb00", 00:14:09.816 "assigned_rate_limits": { 00:14:09.816 "rw_ios_per_sec": 0, 00:14:09.816 "rw_mbytes_per_sec": 0, 00:14:09.816 "r_mbytes_per_sec": 0, 00:14:09.816 "w_mbytes_per_sec": 0 00:14:09.816 }, 00:14:09.816 "claimed": true, 00:14:09.816 "claim_type": "exclusive_write", 00:14:09.816 "zoned": false, 00:14:09.816 "supported_io_types": { 00:14:09.816 "read": true, 00:14:09.816 "write": true, 00:14:09.816 "unmap": true, 00:14:09.816 "flush": true, 00:14:09.816 "reset": true, 00:14:09.816 "nvme_admin": false, 00:14:09.816 "nvme_io": false, 00:14:09.816 "nvme_io_md": false, 00:14:09.816 "write_zeroes": true, 00:14:09.816 "zcopy": true, 00:14:09.816 "get_zone_info": false, 00:14:09.816 "zone_management": false, 00:14:09.816 "zone_append": false, 00:14:09.816 "compare": false, 00:14:09.816 "compare_and_write": false, 00:14:09.816 "abort": true, 00:14:09.816 "seek_hole": false, 00:14:09.816 "seek_data": false, 00:14:09.816 "copy": true, 00:14:09.816 "nvme_iov_md": false 00:14:09.816 }, 00:14:09.816 "memory_domains": [ 00:14:09.816 { 00:14:09.816 "dma_device_id": "system", 00:14:09.816 "dma_device_type": 1 00:14:09.816 }, 
00:14:09.816 { 00:14:09.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.816 "dma_device_type": 2 00:14:09.816 } 00:14:09.816 ], 00:14:09.816 "driver_specific": {} 00:14:09.816 } 00:14:09.816 ] 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.816 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.075 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.075 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:14:10.075 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.075 "name": "Existed_Raid", 00:14:10.075 "uuid": "48c023b9-885d-4b0d-a342-098c53f26d31", 00:14:10.075 "strip_size_kb": 0, 00:14:10.075 "state": "online", 00:14:10.075 "raid_level": "raid1", 00:14:10.075 "superblock": true, 00:14:10.075 "num_base_bdevs": 3, 00:14:10.075 "num_base_bdevs_discovered": 3, 00:14:10.075 "num_base_bdevs_operational": 3, 00:14:10.075 "base_bdevs_list": [ 00:14:10.075 { 00:14:10.075 "name": "BaseBdev1", 00:14:10.075 "uuid": "5091fc75-5afb-46cd-b353-c5ce7b8fb2f8", 00:14:10.075 "is_configured": true, 00:14:10.075 "data_offset": 2048, 00:14:10.075 "data_size": 63488 00:14:10.075 }, 00:14:10.075 { 00:14:10.075 "name": "BaseBdev2", 00:14:10.075 "uuid": "9df00f6d-847d-47c7-9e16-dce056b3ed45", 00:14:10.075 "is_configured": true, 00:14:10.075 "data_offset": 2048, 00:14:10.075 "data_size": 63488 00:14:10.075 }, 00:14:10.075 { 00:14:10.075 "name": "BaseBdev3", 00:14:10.075 "uuid": "09e1841b-bf57-436e-a860-3bc858bebb00", 00:14:10.075 "is_configured": true, 00:14:10.075 "data_offset": 2048, 00:14:10.075 "data_size": 63488 00:14:10.075 } 00:14:10.075 ] 00:14:10.075 }' 00:14:10.075 18:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.075 18:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.644 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:10.644 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:10.644 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:10.644 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:10.644 18:17:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:10.644 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:10.644 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:10.644 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:10.644 [2024-07-24 18:17:19.225886] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.903 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:10.903 "name": "Existed_Raid", 00:14:10.903 "aliases": [ 00:14:10.903 "48c023b9-885d-4b0d-a342-098c53f26d31" 00:14:10.903 ], 00:14:10.903 "product_name": "Raid Volume", 00:14:10.903 "block_size": 512, 00:14:10.903 "num_blocks": 63488, 00:14:10.903 "uuid": "48c023b9-885d-4b0d-a342-098c53f26d31", 00:14:10.903 "assigned_rate_limits": { 00:14:10.903 "rw_ios_per_sec": 0, 00:14:10.903 "rw_mbytes_per_sec": 0, 00:14:10.903 "r_mbytes_per_sec": 0, 00:14:10.903 "w_mbytes_per_sec": 0 00:14:10.903 }, 00:14:10.903 "claimed": false, 00:14:10.903 "zoned": false, 00:14:10.903 "supported_io_types": { 00:14:10.903 "read": true, 00:14:10.903 "write": true, 00:14:10.903 "unmap": false, 00:14:10.903 "flush": false, 00:14:10.903 "reset": true, 00:14:10.903 "nvme_admin": false, 00:14:10.903 "nvme_io": false, 00:14:10.903 "nvme_io_md": false, 00:14:10.903 "write_zeroes": true, 00:14:10.903 "zcopy": false, 00:14:10.903 "get_zone_info": false, 00:14:10.903 "zone_management": false, 00:14:10.903 "zone_append": false, 00:14:10.903 "compare": false, 00:14:10.903 "compare_and_write": false, 00:14:10.903 "abort": false, 00:14:10.903 "seek_hole": false, 00:14:10.903 "seek_data": false, 00:14:10.903 "copy": false, 00:14:10.903 "nvme_iov_md": false 00:14:10.903 }, 00:14:10.903 "memory_domains": [ 00:14:10.903 { 
00:14:10.903 "dma_device_id": "system", 00:14:10.903 "dma_device_type": 1 00:14:10.903 }, 00:14:10.903 { 00:14:10.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.903 "dma_device_type": 2 00:14:10.903 }, 00:14:10.903 { 00:14:10.903 "dma_device_id": "system", 00:14:10.903 "dma_device_type": 1 00:14:10.903 }, 00:14:10.903 { 00:14:10.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.903 "dma_device_type": 2 00:14:10.903 }, 00:14:10.903 { 00:14:10.903 "dma_device_id": "system", 00:14:10.903 "dma_device_type": 1 00:14:10.903 }, 00:14:10.903 { 00:14:10.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.903 "dma_device_type": 2 00:14:10.903 } 00:14:10.903 ], 00:14:10.903 "driver_specific": { 00:14:10.903 "raid": { 00:14:10.903 "uuid": "48c023b9-885d-4b0d-a342-098c53f26d31", 00:14:10.903 "strip_size_kb": 0, 00:14:10.903 "state": "online", 00:14:10.903 "raid_level": "raid1", 00:14:10.903 "superblock": true, 00:14:10.903 "num_base_bdevs": 3, 00:14:10.903 "num_base_bdevs_discovered": 3, 00:14:10.903 "num_base_bdevs_operational": 3, 00:14:10.903 "base_bdevs_list": [ 00:14:10.903 { 00:14:10.903 "name": "BaseBdev1", 00:14:10.903 "uuid": "5091fc75-5afb-46cd-b353-c5ce7b8fb2f8", 00:14:10.903 "is_configured": true, 00:14:10.903 "data_offset": 2048, 00:14:10.903 "data_size": 63488 00:14:10.903 }, 00:14:10.903 { 00:14:10.903 "name": "BaseBdev2", 00:14:10.903 "uuid": "9df00f6d-847d-47c7-9e16-dce056b3ed45", 00:14:10.903 "is_configured": true, 00:14:10.903 "data_offset": 2048, 00:14:10.903 "data_size": 63488 00:14:10.903 }, 00:14:10.903 { 00:14:10.903 "name": "BaseBdev3", 00:14:10.903 "uuid": "09e1841b-bf57-436e-a860-3bc858bebb00", 00:14:10.903 "is_configured": true, 00:14:10.903 "data_offset": 2048, 00:14:10.903 "data_size": 63488 00:14:10.903 } 00:14:10.903 ] 00:14:10.903 } 00:14:10.903 } 00:14:10.903 }' 00:14:10.903 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:14:10.903 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:10.903 BaseBdev2 00:14:10.903 BaseBdev3' 00:14:10.903 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:10.903 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:10.903 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:10.903 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:10.903 "name": "BaseBdev1", 00:14:10.903 "aliases": [ 00:14:10.903 "5091fc75-5afb-46cd-b353-c5ce7b8fb2f8" 00:14:10.903 ], 00:14:10.903 "product_name": "Malloc disk", 00:14:10.903 "block_size": 512, 00:14:10.903 "num_blocks": 65536, 00:14:10.904 "uuid": "5091fc75-5afb-46cd-b353-c5ce7b8fb2f8", 00:14:10.904 "assigned_rate_limits": { 00:14:10.904 "rw_ios_per_sec": 0, 00:14:10.904 "rw_mbytes_per_sec": 0, 00:14:10.904 "r_mbytes_per_sec": 0, 00:14:10.904 "w_mbytes_per_sec": 0 00:14:10.904 }, 00:14:10.904 "claimed": true, 00:14:10.904 "claim_type": "exclusive_write", 00:14:10.904 "zoned": false, 00:14:10.904 "supported_io_types": { 00:14:10.904 "read": true, 00:14:10.904 "write": true, 00:14:10.904 "unmap": true, 00:14:10.904 "flush": true, 00:14:10.904 "reset": true, 00:14:10.904 "nvme_admin": false, 00:14:10.904 "nvme_io": false, 00:14:10.904 "nvme_io_md": false, 00:14:10.904 "write_zeroes": true, 00:14:10.904 "zcopy": true, 00:14:10.904 "get_zone_info": false, 00:14:10.904 "zone_management": false, 00:14:10.904 "zone_append": false, 00:14:10.904 "compare": false, 00:14:10.904 "compare_and_write": false, 00:14:10.904 "abort": true, 00:14:10.904 "seek_hole": false, 00:14:10.904 "seek_data": false, 00:14:10.904 "copy": true, 00:14:10.904 "nvme_iov_md": false 00:14:10.904 
}, 00:14:10.904 "memory_domains": [ 00:14:10.904 { 00:14:10.904 "dma_device_id": "system", 00:14:10.904 "dma_device_type": 1 00:14:10.904 }, 00:14:10.904 { 00:14:10.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.904 "dma_device_type": 2 00:14:10.904 } 00:14:10.904 ], 00:14:10.904 "driver_specific": {} 00:14:10.904 }' 00:14:10.904 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.163 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.163 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:11.163 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.163 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.163 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:11.163 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.163 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.163 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:11.163 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.163 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.422 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:11.422 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:11.422 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:11.422 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 00:14:11.422 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:11.422 "name": "BaseBdev2", 00:14:11.422 "aliases": [ 00:14:11.422 "9df00f6d-847d-47c7-9e16-dce056b3ed45" 00:14:11.422 ], 00:14:11.422 "product_name": "Malloc disk", 00:14:11.422 "block_size": 512, 00:14:11.422 "num_blocks": 65536, 00:14:11.422 "uuid": "9df00f6d-847d-47c7-9e16-dce056b3ed45", 00:14:11.422 "assigned_rate_limits": { 00:14:11.422 "rw_ios_per_sec": 0, 00:14:11.422 "rw_mbytes_per_sec": 0, 00:14:11.422 "r_mbytes_per_sec": 0, 00:14:11.422 "w_mbytes_per_sec": 0 00:14:11.422 }, 00:14:11.422 "claimed": true, 00:14:11.422 "claim_type": "exclusive_write", 00:14:11.422 "zoned": false, 00:14:11.422 "supported_io_types": { 00:14:11.423 "read": true, 00:14:11.423 "write": true, 00:14:11.423 "unmap": true, 00:14:11.423 "flush": true, 00:14:11.423 "reset": true, 00:14:11.423 "nvme_admin": false, 00:14:11.423 "nvme_io": false, 00:14:11.423 "nvme_io_md": false, 00:14:11.423 "write_zeroes": true, 00:14:11.423 "zcopy": true, 00:14:11.423 "get_zone_info": false, 00:14:11.423 "zone_management": false, 00:14:11.423 "zone_append": false, 00:14:11.423 "compare": false, 00:14:11.423 "compare_and_write": false, 00:14:11.423 "abort": true, 00:14:11.423 "seek_hole": false, 00:14:11.423 "seek_data": false, 00:14:11.423 "copy": true, 00:14:11.423 "nvme_iov_md": false 00:14:11.423 }, 00:14:11.423 "memory_domains": [ 00:14:11.423 { 00:14:11.423 "dma_device_id": "system", 00:14:11.423 "dma_device_type": 1 00:14:11.423 }, 00:14:11.423 { 00:14:11.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.423 "dma_device_type": 2 00:14:11.423 } 00:14:11.423 ], 00:14:11.423 "driver_specific": {} 00:14:11.423 }' 00:14:11.423 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.423 18:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.681 18:17:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:11.681 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.681 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.681 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:11.681 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.681 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.681 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:11.681 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.681 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.681 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:11.681 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:11.682 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:11.682 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:11.941 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:11.941 "name": "BaseBdev3", 00:14:11.941 "aliases": [ 00:14:11.941 "09e1841b-bf57-436e-a860-3bc858bebb00" 00:14:11.941 ], 00:14:11.941 "product_name": "Malloc disk", 00:14:11.941 "block_size": 512, 00:14:11.941 "num_blocks": 65536, 00:14:11.941 "uuid": "09e1841b-bf57-436e-a860-3bc858bebb00", 00:14:11.941 "assigned_rate_limits": { 00:14:11.941 "rw_ios_per_sec": 0, 00:14:11.941 "rw_mbytes_per_sec": 0, 00:14:11.941 
"r_mbytes_per_sec": 0, 00:14:11.941 "w_mbytes_per_sec": 0 00:14:11.941 }, 00:14:11.941 "claimed": true, 00:14:11.941 "claim_type": "exclusive_write", 00:14:11.941 "zoned": false, 00:14:11.941 "supported_io_types": { 00:14:11.941 "read": true, 00:14:11.941 "write": true, 00:14:11.941 "unmap": true, 00:14:11.941 "flush": true, 00:14:11.941 "reset": true, 00:14:11.941 "nvme_admin": false, 00:14:11.941 "nvme_io": false, 00:14:11.941 "nvme_io_md": false, 00:14:11.941 "write_zeroes": true, 00:14:11.941 "zcopy": true, 00:14:11.941 "get_zone_info": false, 00:14:11.941 "zone_management": false, 00:14:11.941 "zone_append": false, 00:14:11.941 "compare": false, 00:14:11.941 "compare_and_write": false, 00:14:11.941 "abort": true, 00:14:11.941 "seek_hole": false, 00:14:11.941 "seek_data": false, 00:14:11.941 "copy": true, 00:14:11.941 "nvme_iov_md": false 00:14:11.941 }, 00:14:11.941 "memory_domains": [ 00:14:11.941 { 00:14:11.941 "dma_device_id": "system", 00:14:11.941 "dma_device_type": 1 00:14:11.941 }, 00:14:11.941 { 00:14:11.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.941 "dma_device_type": 2 00:14:11.941 } 00:14:11.941 ], 00:14:11.941 "driver_specific": {} 00:14:11.941 }' 00:14:11.941 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.941 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.941 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:11.941 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.200 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:12.200 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:12.200 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.200 18:17:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:12.200 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:12.200 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.200 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:12.200 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:12.200 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:12.459 [2024-07-24 18:17:20.885999] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:12.459 18:17:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:12.460 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.460 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.460 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.460 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.460 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.460 18:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.719 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.719 "name": "Existed_Raid", 00:14:12.719 "uuid": "48c023b9-885d-4b0d-a342-098c53f26d31", 00:14:12.719 "strip_size_kb": 0, 00:14:12.719 "state": "online", 00:14:12.719 "raid_level": "raid1", 00:14:12.719 "superblock": true, 00:14:12.719 "num_base_bdevs": 3, 00:14:12.719 "num_base_bdevs_discovered": 2, 00:14:12.719 "num_base_bdevs_operational": 2, 00:14:12.719 "base_bdevs_list": [ 00:14:12.719 { 00:14:12.719 "name": null, 00:14:12.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.719 "is_configured": false, 00:14:12.719 "data_offset": 2048, 00:14:12.719 "data_size": 63488 00:14:12.719 }, 00:14:12.719 { 00:14:12.719 "name": "BaseBdev2", 00:14:12.719 "uuid": "9df00f6d-847d-47c7-9e16-dce056b3ed45", 00:14:12.719 "is_configured": true, 00:14:12.719 "data_offset": 2048, 00:14:12.719 "data_size": 63488 00:14:12.719 }, 00:14:12.719 { 00:14:12.719 "name": "BaseBdev3", 00:14:12.719 "uuid": "09e1841b-bf57-436e-a860-3bc858bebb00", 00:14:12.719 "is_configured": true, 00:14:12.719 "data_offset": 2048, 00:14:12.719 
"data_size": 63488 00:14:12.719 } 00:14:12.719 ] 00:14:12.719 }' 00:14:12.719 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.719 18:17:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.978 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:12.978 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:12.978 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:12.978 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.238 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:13.238 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.238 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:13.497 [2024-07-24 18:17:21.889476] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.497 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:13.497 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:13.497 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.497 18:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:13.497 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:14:13.497 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.497 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:13.756 [2024-07-24 18:17:22.240233] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:13.756 [2024-07-24 18:17:22.240295] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.756 [2024-07-24 18:17:22.250057] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.756 [2024-07-24 18:17:22.250097] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.756 [2024-07-24 18:17:22.250104] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xff1980 name Existed_Raid, state offline 00:14:13.756 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:13.756 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:13.756 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.756 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:14.016 18:17:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:14.016 BaseBdev2 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:14.016 18:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.274 18:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:14.532 [ 00:14:14.532 { 00:14:14.532 "name": "BaseBdev2", 00:14:14.532 "aliases": [ 00:14:14.532 "45fd1b92-6e0a-4b7a-af9d-08fca4583655" 00:14:14.532 ], 00:14:14.532 "product_name": "Malloc disk", 00:14:14.532 "block_size": 512, 00:14:14.532 "num_blocks": 65536, 00:14:14.532 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:14.532 "assigned_rate_limits": { 00:14:14.532 "rw_ios_per_sec": 0, 00:14:14.532 "rw_mbytes_per_sec": 0, 00:14:14.532 "r_mbytes_per_sec": 0, 00:14:14.532 "w_mbytes_per_sec": 0 00:14:14.532 }, 00:14:14.532 
"claimed": false, 00:14:14.532 "zoned": false, 00:14:14.532 "supported_io_types": { 00:14:14.532 "read": true, 00:14:14.532 "write": true, 00:14:14.532 "unmap": true, 00:14:14.532 "flush": true, 00:14:14.533 "reset": true, 00:14:14.533 "nvme_admin": false, 00:14:14.533 "nvme_io": false, 00:14:14.533 "nvme_io_md": false, 00:14:14.533 "write_zeroes": true, 00:14:14.533 "zcopy": true, 00:14:14.533 "get_zone_info": false, 00:14:14.533 "zone_management": false, 00:14:14.533 "zone_append": false, 00:14:14.533 "compare": false, 00:14:14.533 "compare_and_write": false, 00:14:14.533 "abort": true, 00:14:14.533 "seek_hole": false, 00:14:14.533 "seek_data": false, 00:14:14.533 "copy": true, 00:14:14.533 "nvme_iov_md": false 00:14:14.533 }, 00:14:14.533 "memory_domains": [ 00:14:14.533 { 00:14:14.533 "dma_device_id": "system", 00:14:14.533 "dma_device_type": 1 00:14:14.533 }, 00:14:14.533 { 00:14:14.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.533 "dma_device_type": 2 00:14:14.533 } 00:14:14.533 ], 00:14:14.533 "driver_specific": {} 00:14:14.533 } 00:14:14.533 ] 00:14:14.533 18:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:14.533 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:14.533 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:14.533 18:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:14.533 BaseBdev3 00:14:14.533 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:14.533 18:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:14.533 18:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:14:14.533 18:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:14.533 18:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:14.533 18:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:14.533 18:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.792 18:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:15.050 [ 00:14:15.050 { 00:14:15.050 "name": "BaseBdev3", 00:14:15.050 "aliases": [ 00:14:15.050 "19f2f7f7-38e0-4728-b003-cb3be9930e28" 00:14:15.050 ], 00:14:15.050 "product_name": "Malloc disk", 00:14:15.050 "block_size": 512, 00:14:15.050 "num_blocks": 65536, 00:14:15.050 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:15.050 "assigned_rate_limits": { 00:14:15.050 "rw_ios_per_sec": 0, 00:14:15.050 "rw_mbytes_per_sec": 0, 00:14:15.050 "r_mbytes_per_sec": 0, 00:14:15.050 "w_mbytes_per_sec": 0 00:14:15.050 }, 00:14:15.050 "claimed": false, 00:14:15.050 "zoned": false, 00:14:15.050 "supported_io_types": { 00:14:15.050 "read": true, 00:14:15.050 "write": true, 00:14:15.050 "unmap": true, 00:14:15.050 "flush": true, 00:14:15.050 "reset": true, 00:14:15.050 "nvme_admin": false, 00:14:15.050 "nvme_io": false, 00:14:15.050 "nvme_io_md": false, 00:14:15.050 "write_zeroes": true, 00:14:15.050 "zcopy": true, 00:14:15.050 "get_zone_info": false, 00:14:15.050 "zone_management": false, 00:14:15.050 "zone_append": false, 00:14:15.050 "compare": false, 00:14:15.050 "compare_and_write": false, 00:14:15.050 "abort": true, 00:14:15.050 "seek_hole": false, 00:14:15.050 "seek_data": false, 00:14:15.050 "copy": true, 
00:14:15.050 "nvme_iov_md": false 00:14:15.050 }, 00:14:15.050 "memory_domains": [ 00:14:15.050 { 00:14:15.050 "dma_device_id": "system", 00:14:15.050 "dma_device_type": 1 00:14:15.050 }, 00:14:15.050 { 00:14:15.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.050 "dma_device_type": 2 00:14:15.050 } 00:14:15.050 ], 00:14:15.050 "driver_specific": {} 00:14:15.050 } 00:14:15.050 ] 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:15.050 [2024-07-24 18:17:23.573096] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.050 [2024-07-24 18:17:23.573126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.050 [2024-07-24 18:17:23.573142] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.050 [2024-07-24 18:17:23.574058] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.050 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.309 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.309 "name": "Existed_Raid", 00:14:15.309 "uuid": "35202f6c-34c2-45aa-b7c2-cb91a106365b", 00:14:15.309 "strip_size_kb": 0, 00:14:15.309 "state": "configuring", 00:14:15.309 "raid_level": "raid1", 00:14:15.309 "superblock": true, 00:14:15.309 "num_base_bdevs": 3, 00:14:15.309 "num_base_bdevs_discovered": 2, 00:14:15.309 "num_base_bdevs_operational": 3, 00:14:15.309 "base_bdevs_list": [ 00:14:15.309 { 00:14:15.309 "name": "BaseBdev1", 00:14:15.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.309 "is_configured": false, 00:14:15.309 "data_offset": 0, 00:14:15.309 "data_size": 0 00:14:15.309 }, 00:14:15.309 { 00:14:15.309 "name": "BaseBdev2", 00:14:15.309 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:15.309 "is_configured": true, 00:14:15.309 "data_offset": 2048, 00:14:15.309 "data_size": 63488 00:14:15.309 }, 00:14:15.309 { 00:14:15.309 
"name": "BaseBdev3", 00:14:15.309 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:15.309 "is_configured": true, 00:14:15.309 "data_offset": 2048, 00:14:15.309 "data_size": 63488 00:14:15.309 } 00:14:15.309 ] 00:14:15.309 }' 00:14:15.309 18:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.309 18:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:15.877 [2024-07-24 18:17:24.359097] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.877 18:17:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.877 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.136 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.136 "name": "Existed_Raid", 00:14:16.136 "uuid": "35202f6c-34c2-45aa-b7c2-cb91a106365b", 00:14:16.136 "strip_size_kb": 0, 00:14:16.136 "state": "configuring", 00:14:16.136 "raid_level": "raid1", 00:14:16.136 "superblock": true, 00:14:16.136 "num_base_bdevs": 3, 00:14:16.136 "num_base_bdevs_discovered": 1, 00:14:16.136 "num_base_bdevs_operational": 3, 00:14:16.136 "base_bdevs_list": [ 00:14:16.136 { 00:14:16.136 "name": "BaseBdev1", 00:14:16.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.136 "is_configured": false, 00:14:16.136 "data_offset": 0, 00:14:16.136 "data_size": 0 00:14:16.136 }, 00:14:16.136 { 00:14:16.136 "name": null, 00:14:16.136 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:16.136 "is_configured": false, 00:14:16.136 "data_offset": 2048, 00:14:16.136 "data_size": 63488 00:14:16.136 }, 00:14:16.136 { 00:14:16.136 "name": "BaseBdev3", 00:14:16.136 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:16.136 "is_configured": true, 00:14:16.136 "data_offset": 2048, 00:14:16.136 "data_size": 63488 00:14:16.136 } 00:14:16.136 ] 00:14:16.136 }' 00:14:16.136 18:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.136 18:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.704 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.704 18:17:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:16.704 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:16.704 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.962 [2024-07-24 18:17:25.376540] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.962 BaseBdev1 00:14:16.962 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:16.962 18:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:16.962 18:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:16.962 18:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:16.962 18:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:16.962 18:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:16.962 18:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:17.220 18:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:17.220 [ 00:14:17.220 { 00:14:17.220 "name": "BaseBdev1", 00:14:17.220 "aliases": [ 00:14:17.220 "76351e27-c16b-449f-bc90-f60741f15f33" 00:14:17.220 ], 00:14:17.220 "product_name": "Malloc disk", 00:14:17.220 "block_size": 512, 00:14:17.220 "num_blocks": 65536, 00:14:17.220 "uuid": 
"76351e27-c16b-449f-bc90-f60741f15f33", 00:14:17.220 "assigned_rate_limits": { 00:14:17.220 "rw_ios_per_sec": 0, 00:14:17.220 "rw_mbytes_per_sec": 0, 00:14:17.220 "r_mbytes_per_sec": 0, 00:14:17.220 "w_mbytes_per_sec": 0 00:14:17.220 }, 00:14:17.220 "claimed": true, 00:14:17.220 "claim_type": "exclusive_write", 00:14:17.220 "zoned": false, 00:14:17.220 "supported_io_types": { 00:14:17.220 "read": true, 00:14:17.221 "write": true, 00:14:17.221 "unmap": true, 00:14:17.221 "flush": true, 00:14:17.221 "reset": true, 00:14:17.221 "nvme_admin": false, 00:14:17.221 "nvme_io": false, 00:14:17.221 "nvme_io_md": false, 00:14:17.221 "write_zeroes": true, 00:14:17.221 "zcopy": true, 00:14:17.221 "get_zone_info": false, 00:14:17.221 "zone_management": false, 00:14:17.221 "zone_append": false, 00:14:17.221 "compare": false, 00:14:17.221 "compare_and_write": false, 00:14:17.221 "abort": true, 00:14:17.221 "seek_hole": false, 00:14:17.221 "seek_data": false, 00:14:17.221 "copy": true, 00:14:17.221 "nvme_iov_md": false 00:14:17.221 }, 00:14:17.221 "memory_domains": [ 00:14:17.221 { 00:14:17.221 "dma_device_id": "system", 00:14:17.221 "dma_device_type": 1 00:14:17.221 }, 00:14:17.221 { 00:14:17.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.221 "dma_device_type": 2 00:14:17.221 } 00:14:17.221 ], 00:14:17.221 "driver_specific": {} 00:14:17.221 } 00:14:17.221 ] 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:17.221 
18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.221 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.480 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:17.480 "name": "Existed_Raid", 00:14:17.480 "uuid": "35202f6c-34c2-45aa-b7c2-cb91a106365b", 00:14:17.480 "strip_size_kb": 0, 00:14:17.480 "state": "configuring", 00:14:17.480 "raid_level": "raid1", 00:14:17.480 "superblock": true, 00:14:17.480 "num_base_bdevs": 3, 00:14:17.480 "num_base_bdevs_discovered": 2, 00:14:17.480 "num_base_bdevs_operational": 3, 00:14:17.480 "base_bdevs_list": [ 00:14:17.480 { 00:14:17.480 "name": "BaseBdev1", 00:14:17.480 "uuid": "76351e27-c16b-449f-bc90-f60741f15f33", 00:14:17.480 "is_configured": true, 00:14:17.480 "data_offset": 2048, 00:14:17.480 "data_size": 63488 00:14:17.480 }, 00:14:17.480 { 00:14:17.480 "name": null, 00:14:17.480 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:17.480 "is_configured": false, 00:14:17.480 "data_offset": 2048, 00:14:17.480 "data_size": 63488 00:14:17.480 }, 00:14:17.480 { 00:14:17.480 "name": 
"BaseBdev3", 00:14:17.480 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:17.480 "is_configured": true, 00:14:17.480 "data_offset": 2048, 00:14:17.480 "data_size": 63488 00:14:17.480 } 00:14:17.480 ] 00:14:17.480 }' 00:14:17.480 18:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.480 18:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.047 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.047 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:18.047 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:18.047 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:18.306 [2024-07-24 18:17:26.683923] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:18.306 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:18.306 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:18.306 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:18.306 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:18.306 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:18.307 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:18.307 18:17:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:18.307 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:18.307 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:18.307 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:18.307 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.307 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.307 18:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.307 "name": "Existed_Raid", 00:14:18.307 "uuid": "35202f6c-34c2-45aa-b7c2-cb91a106365b", 00:14:18.307 "strip_size_kb": 0, 00:14:18.307 "state": "configuring", 00:14:18.307 "raid_level": "raid1", 00:14:18.307 "superblock": true, 00:14:18.307 "num_base_bdevs": 3, 00:14:18.307 "num_base_bdevs_discovered": 1, 00:14:18.307 "num_base_bdevs_operational": 3, 00:14:18.307 "base_bdevs_list": [ 00:14:18.307 { 00:14:18.307 "name": "BaseBdev1", 00:14:18.307 "uuid": "76351e27-c16b-449f-bc90-f60741f15f33", 00:14:18.307 "is_configured": true, 00:14:18.307 "data_offset": 2048, 00:14:18.307 "data_size": 63488 00:14:18.307 }, 00:14:18.307 { 00:14:18.307 "name": null, 00:14:18.307 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:18.307 "is_configured": false, 00:14:18.307 "data_offset": 2048, 00:14:18.307 "data_size": 63488 00:14:18.307 }, 00:14:18.307 { 00:14:18.307 "name": null, 00:14:18.307 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:18.307 "is_configured": false, 00:14:18.307 "data_offset": 2048, 00:14:18.307 "data_size": 63488 00:14:18.307 } 00:14:18.307 ] 00:14:18.307 }' 00:14:18.307 18:17:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.307 18:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.874 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.874 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:19.133 [2024-07-24 18:17:27.666490] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.133 18:17:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.133 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.392 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.392 "name": "Existed_Raid", 00:14:19.392 "uuid": "35202f6c-34c2-45aa-b7c2-cb91a106365b", 00:14:19.392 "strip_size_kb": 0, 00:14:19.392 "state": "configuring", 00:14:19.392 "raid_level": "raid1", 00:14:19.392 "superblock": true, 00:14:19.392 "num_base_bdevs": 3, 00:14:19.392 "num_base_bdevs_discovered": 2, 00:14:19.392 "num_base_bdevs_operational": 3, 00:14:19.392 "base_bdevs_list": [ 00:14:19.392 { 00:14:19.392 "name": "BaseBdev1", 00:14:19.392 "uuid": "76351e27-c16b-449f-bc90-f60741f15f33", 00:14:19.392 "is_configured": true, 00:14:19.392 "data_offset": 2048, 00:14:19.392 "data_size": 63488 00:14:19.392 }, 00:14:19.392 { 00:14:19.392 "name": null, 00:14:19.392 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:19.392 "is_configured": false, 00:14:19.392 "data_offset": 2048, 00:14:19.392 "data_size": 63488 00:14:19.392 }, 00:14:19.392 { 00:14:19.392 "name": "BaseBdev3", 00:14:19.392 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:19.392 "is_configured": true, 00:14:19.392 "data_offset": 2048, 00:14:19.392 "data_size": 63488 00:14:19.392 } 00:14:19.392 ] 00:14:19.392 }' 00:14:19.392 18:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.392 18:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.958 18:17:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.958 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:19.958 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:19.958 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:20.217 [2024-07-24 18:17:28.629047] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.217 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.476 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.476 "name": "Existed_Raid", 00:14:20.476 "uuid": "35202f6c-34c2-45aa-b7c2-cb91a106365b", 00:14:20.476 "strip_size_kb": 0, 00:14:20.476 "state": "configuring", 00:14:20.476 "raid_level": "raid1", 00:14:20.476 "superblock": true, 00:14:20.476 "num_base_bdevs": 3, 00:14:20.476 "num_base_bdevs_discovered": 1, 00:14:20.476 "num_base_bdevs_operational": 3, 00:14:20.476 "base_bdevs_list": [ 00:14:20.476 { 00:14:20.476 "name": null, 00:14:20.476 "uuid": "76351e27-c16b-449f-bc90-f60741f15f33", 00:14:20.476 "is_configured": false, 00:14:20.476 "data_offset": 2048, 00:14:20.477 "data_size": 63488 00:14:20.477 }, 00:14:20.477 { 00:14:20.477 "name": null, 00:14:20.477 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:20.477 "is_configured": false, 00:14:20.477 "data_offset": 2048, 00:14:20.477 "data_size": 63488 00:14:20.477 }, 00:14:20.477 { 00:14:20.477 "name": "BaseBdev3", 00:14:20.477 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:20.477 "is_configured": true, 00:14:20.477 "data_offset": 2048, 00:14:20.477 "data_size": 63488 00:14:20.477 } 00:14:20.477 ] 00:14:20.477 }' 00:14:20.477 18:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.477 18:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.736 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.736 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:14:20.994 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:20.994 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:21.254 [2024-07-24 18:17:29.653419] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.254 "name": "Existed_Raid", 00:14:21.254 "uuid": "35202f6c-34c2-45aa-b7c2-cb91a106365b", 00:14:21.254 "strip_size_kb": 0, 00:14:21.254 "state": "configuring", 00:14:21.254 "raid_level": "raid1", 00:14:21.254 "superblock": true, 00:14:21.254 "num_base_bdevs": 3, 00:14:21.254 "num_base_bdevs_discovered": 2, 00:14:21.254 "num_base_bdevs_operational": 3, 00:14:21.254 "base_bdevs_list": [ 00:14:21.254 { 00:14:21.254 "name": null, 00:14:21.254 "uuid": "76351e27-c16b-449f-bc90-f60741f15f33", 00:14:21.254 "is_configured": false, 00:14:21.254 "data_offset": 2048, 00:14:21.254 "data_size": 63488 00:14:21.254 }, 00:14:21.254 { 00:14:21.254 "name": "BaseBdev2", 00:14:21.254 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:21.254 "is_configured": true, 00:14:21.254 "data_offset": 2048, 00:14:21.254 "data_size": 63488 00:14:21.254 }, 00:14:21.254 { 00:14:21.254 "name": "BaseBdev3", 00:14:21.254 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:21.254 "is_configured": true, 00:14:21.254 "data_offset": 2048, 00:14:21.254 "data_size": 63488 00:14:21.254 } 00:14:21.254 ] 00:14:21.254 }' 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.254 18:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.822 18:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.822 18:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:22.081 18:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:22.081 18:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.081 18:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:22.081 18:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 76351e27-c16b-449f-bc90-f60741f15f33 00:14:22.340 [2024-07-24 18:17:30.819297] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:22.340 [2024-07-24 18:17:30.819403] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x119f300 00:14:22.340 [2024-07-24 18:17:30.819412] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:22.340 [2024-07-24 18:17:30.819536] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xff1190 00:14:22.340 [2024-07-24 18:17:30.819619] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x119f300 00:14:22.340 [2024-07-24 18:17:30.819633] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x119f300 00:14:22.340 [2024-07-24 18:17:30.819701] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.340 NewBaseBdev 00:14:22.340 18:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:22.340 18:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:22.340 18:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:22.340 18:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:22.340 18:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:22.340 
18:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:22.340 18:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:22.600 [ 00:14:22.600 { 00:14:22.600 "name": "NewBaseBdev", 00:14:22.600 "aliases": [ 00:14:22.600 "76351e27-c16b-449f-bc90-f60741f15f33" 00:14:22.600 ], 00:14:22.600 "product_name": "Malloc disk", 00:14:22.600 "block_size": 512, 00:14:22.600 "num_blocks": 65536, 00:14:22.600 "uuid": "76351e27-c16b-449f-bc90-f60741f15f33", 00:14:22.600 "assigned_rate_limits": { 00:14:22.600 "rw_ios_per_sec": 0, 00:14:22.600 "rw_mbytes_per_sec": 0, 00:14:22.600 "r_mbytes_per_sec": 0, 00:14:22.600 "w_mbytes_per_sec": 0 00:14:22.600 }, 00:14:22.600 "claimed": true, 00:14:22.600 "claim_type": "exclusive_write", 00:14:22.600 "zoned": false, 00:14:22.600 "supported_io_types": { 00:14:22.600 "read": true, 00:14:22.600 "write": true, 00:14:22.600 "unmap": true, 00:14:22.600 "flush": true, 00:14:22.600 "reset": true, 00:14:22.600 "nvme_admin": false, 00:14:22.600 "nvme_io": false, 00:14:22.600 "nvme_io_md": false, 00:14:22.600 "write_zeroes": true, 00:14:22.600 "zcopy": true, 00:14:22.600 "get_zone_info": false, 00:14:22.600 "zone_management": false, 00:14:22.600 "zone_append": false, 00:14:22.600 "compare": false, 00:14:22.600 "compare_and_write": false, 00:14:22.600 "abort": true, 00:14:22.600 "seek_hole": false, 00:14:22.600 "seek_data": false, 00:14:22.600 "copy": true, 00:14:22.600 "nvme_iov_md": false 00:14:22.600 }, 00:14:22.600 "memory_domains": [ 00:14:22.600 { 00:14:22.600 "dma_device_id": "system", 00:14:22.600 "dma_device_type": 1 00:14:22.600 
}, 00:14:22.600 { 00:14:22.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.600 "dma_device_type": 2 00:14:22.600 } 00:14:22.600 ], 00:14:22.600 "driver_specific": {} 00:14:22.600 } 00:14:22.600 ] 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.600 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.859 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.859 "name": "Existed_Raid", 00:14:22.859 "uuid": 
"35202f6c-34c2-45aa-b7c2-cb91a106365b", 00:14:22.859 "strip_size_kb": 0, 00:14:22.859 "state": "online", 00:14:22.859 "raid_level": "raid1", 00:14:22.859 "superblock": true, 00:14:22.859 "num_base_bdevs": 3, 00:14:22.859 "num_base_bdevs_discovered": 3, 00:14:22.859 "num_base_bdevs_operational": 3, 00:14:22.859 "base_bdevs_list": [ 00:14:22.859 { 00:14:22.859 "name": "NewBaseBdev", 00:14:22.859 "uuid": "76351e27-c16b-449f-bc90-f60741f15f33", 00:14:22.859 "is_configured": true, 00:14:22.859 "data_offset": 2048, 00:14:22.859 "data_size": 63488 00:14:22.859 }, 00:14:22.859 { 00:14:22.859 "name": "BaseBdev2", 00:14:22.859 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:22.859 "is_configured": true, 00:14:22.859 "data_offset": 2048, 00:14:22.859 "data_size": 63488 00:14:22.859 }, 00:14:22.859 { 00:14:22.859 "name": "BaseBdev3", 00:14:22.859 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:22.859 "is_configured": true, 00:14:22.859 "data_offset": 2048, 00:14:22.859 "data_size": 63488 00:14:22.859 } 00:14:22.859 ] 00:14:22.859 }' 00:14:22.859 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.860 18:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.428 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:23.428 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:23.428 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:23.428 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:23.429 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:23.429 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:23.429 18:17:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:23.429 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:23.429 [2024-07-24 18:17:31.962432] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.429 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:23.429 "name": "Existed_Raid", 00:14:23.429 "aliases": [ 00:14:23.429 "35202f6c-34c2-45aa-b7c2-cb91a106365b" 00:14:23.429 ], 00:14:23.429 "product_name": "Raid Volume", 00:14:23.429 "block_size": 512, 00:14:23.429 "num_blocks": 63488, 00:14:23.429 "uuid": "35202f6c-34c2-45aa-b7c2-cb91a106365b", 00:14:23.429 "assigned_rate_limits": { 00:14:23.429 "rw_ios_per_sec": 0, 00:14:23.429 "rw_mbytes_per_sec": 0, 00:14:23.429 "r_mbytes_per_sec": 0, 00:14:23.429 "w_mbytes_per_sec": 0 00:14:23.429 }, 00:14:23.429 "claimed": false, 00:14:23.429 "zoned": false, 00:14:23.429 "supported_io_types": { 00:14:23.429 "read": true, 00:14:23.429 "write": true, 00:14:23.429 "unmap": false, 00:14:23.429 "flush": false, 00:14:23.429 "reset": true, 00:14:23.429 "nvme_admin": false, 00:14:23.429 "nvme_io": false, 00:14:23.429 "nvme_io_md": false, 00:14:23.429 "write_zeroes": true, 00:14:23.429 "zcopy": false, 00:14:23.429 "get_zone_info": false, 00:14:23.429 "zone_management": false, 00:14:23.429 "zone_append": false, 00:14:23.429 "compare": false, 00:14:23.429 "compare_and_write": false, 00:14:23.429 "abort": false, 00:14:23.429 "seek_hole": false, 00:14:23.429 "seek_data": false, 00:14:23.429 "copy": false, 00:14:23.429 "nvme_iov_md": false 00:14:23.429 }, 00:14:23.429 "memory_domains": [ 00:14:23.429 { 00:14:23.429 "dma_device_id": "system", 00:14:23.429 "dma_device_type": 1 00:14:23.429 }, 00:14:23.429 { 00:14:23.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.429 
"dma_device_type": 2 00:14:23.429 }, 00:14:23.429 { 00:14:23.429 "dma_device_id": "system", 00:14:23.429 "dma_device_type": 1 00:14:23.429 }, 00:14:23.429 { 00:14:23.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.429 "dma_device_type": 2 00:14:23.429 }, 00:14:23.429 { 00:14:23.429 "dma_device_id": "system", 00:14:23.429 "dma_device_type": 1 00:14:23.429 }, 00:14:23.429 { 00:14:23.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.429 "dma_device_type": 2 00:14:23.429 } 00:14:23.429 ], 00:14:23.429 "driver_specific": { 00:14:23.429 "raid": { 00:14:23.429 "uuid": "35202f6c-34c2-45aa-b7c2-cb91a106365b", 00:14:23.429 "strip_size_kb": 0, 00:14:23.429 "state": "online", 00:14:23.429 "raid_level": "raid1", 00:14:23.429 "superblock": true, 00:14:23.429 "num_base_bdevs": 3, 00:14:23.429 "num_base_bdevs_discovered": 3, 00:14:23.429 "num_base_bdevs_operational": 3, 00:14:23.429 "base_bdevs_list": [ 00:14:23.429 { 00:14:23.429 "name": "NewBaseBdev", 00:14:23.429 "uuid": "76351e27-c16b-449f-bc90-f60741f15f33", 00:14:23.429 "is_configured": true, 00:14:23.429 "data_offset": 2048, 00:14:23.429 "data_size": 63488 00:14:23.429 }, 00:14:23.429 { 00:14:23.429 "name": "BaseBdev2", 00:14:23.429 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:23.429 "is_configured": true, 00:14:23.429 "data_offset": 2048, 00:14:23.429 "data_size": 63488 00:14:23.429 }, 00:14:23.429 { 00:14:23.429 "name": "BaseBdev3", 00:14:23.429 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:23.429 "is_configured": true, 00:14:23.429 "data_offset": 2048, 00:14:23.429 "data_size": 63488 00:14:23.429 } 00:14:23.429 ] 00:14:23.429 } 00:14:23.429 } 00:14:23.429 }' 00:14:23.429 18:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.688 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:23.688 BaseBdev2 00:14:23.688 
BaseBdev3' 00:14:23.688 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.689 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:23.689 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.689 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.689 "name": "NewBaseBdev", 00:14:23.689 "aliases": [ 00:14:23.689 "76351e27-c16b-449f-bc90-f60741f15f33" 00:14:23.689 ], 00:14:23.689 "product_name": "Malloc disk", 00:14:23.689 "block_size": 512, 00:14:23.689 "num_blocks": 65536, 00:14:23.689 "uuid": "76351e27-c16b-449f-bc90-f60741f15f33", 00:14:23.689 "assigned_rate_limits": { 00:14:23.689 "rw_ios_per_sec": 0, 00:14:23.689 "rw_mbytes_per_sec": 0, 00:14:23.689 "r_mbytes_per_sec": 0, 00:14:23.689 "w_mbytes_per_sec": 0 00:14:23.689 }, 00:14:23.689 "claimed": true, 00:14:23.689 "claim_type": "exclusive_write", 00:14:23.689 "zoned": false, 00:14:23.689 "supported_io_types": { 00:14:23.689 "read": true, 00:14:23.689 "write": true, 00:14:23.689 "unmap": true, 00:14:23.689 "flush": true, 00:14:23.689 "reset": true, 00:14:23.689 "nvme_admin": false, 00:14:23.689 "nvme_io": false, 00:14:23.689 "nvme_io_md": false, 00:14:23.689 "write_zeroes": true, 00:14:23.689 "zcopy": true, 00:14:23.689 "get_zone_info": false, 00:14:23.689 "zone_management": false, 00:14:23.689 "zone_append": false, 00:14:23.689 "compare": false, 00:14:23.689 "compare_and_write": false, 00:14:23.689 "abort": true, 00:14:23.689 "seek_hole": false, 00:14:23.689 "seek_data": false, 00:14:23.689 "copy": true, 00:14:23.689 "nvme_iov_md": false 00:14:23.689 }, 00:14:23.689 "memory_domains": [ 00:14:23.689 { 00:14:23.689 "dma_device_id": "system", 00:14:23.689 "dma_device_type": 1 00:14:23.689 }, 00:14:23.689 { 
00:14:23.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.689 "dma_device_type": 2 00:14:23.689 } 00:14:23.689 ], 00:14:23.689 "driver_specific": {} 00:14:23.689 }' 00:14:23.689 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.689 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.689 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.689 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:23.949 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.208 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.208 "name": 
"BaseBdev2", 00:14:24.208 "aliases": [ 00:14:24.208 "45fd1b92-6e0a-4b7a-af9d-08fca4583655" 00:14:24.208 ], 00:14:24.208 "product_name": "Malloc disk", 00:14:24.208 "block_size": 512, 00:14:24.208 "num_blocks": 65536, 00:14:24.208 "uuid": "45fd1b92-6e0a-4b7a-af9d-08fca4583655", 00:14:24.208 "assigned_rate_limits": { 00:14:24.208 "rw_ios_per_sec": 0, 00:14:24.208 "rw_mbytes_per_sec": 0, 00:14:24.208 "r_mbytes_per_sec": 0, 00:14:24.208 "w_mbytes_per_sec": 0 00:14:24.208 }, 00:14:24.208 "claimed": true, 00:14:24.208 "claim_type": "exclusive_write", 00:14:24.208 "zoned": false, 00:14:24.208 "supported_io_types": { 00:14:24.208 "read": true, 00:14:24.208 "write": true, 00:14:24.208 "unmap": true, 00:14:24.208 "flush": true, 00:14:24.208 "reset": true, 00:14:24.208 "nvme_admin": false, 00:14:24.208 "nvme_io": false, 00:14:24.208 "nvme_io_md": false, 00:14:24.208 "write_zeroes": true, 00:14:24.208 "zcopy": true, 00:14:24.208 "get_zone_info": false, 00:14:24.208 "zone_management": false, 00:14:24.208 "zone_append": false, 00:14:24.208 "compare": false, 00:14:24.208 "compare_and_write": false, 00:14:24.208 "abort": true, 00:14:24.208 "seek_hole": false, 00:14:24.208 "seek_data": false, 00:14:24.208 "copy": true, 00:14:24.208 "nvme_iov_md": false 00:14:24.208 }, 00:14:24.208 "memory_domains": [ 00:14:24.208 { 00:14:24.208 "dma_device_id": "system", 00:14:24.208 "dma_device_type": 1 00:14:24.208 }, 00:14:24.208 { 00:14:24.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.208 "dma_device_type": 2 00:14:24.208 } 00:14:24.208 ], 00:14:24.208 "driver_specific": {} 00:14:24.208 }' 00:14:24.208 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.208 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.208 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.208 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:14:24.208 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.208 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.468 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.468 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.468 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:24.468 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.468 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:24.468 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:24.468 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:24.468 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:24.468 18:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:24.729 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:24.729 "name": "BaseBdev3", 00:14:24.729 "aliases": [ 00:14:24.729 "19f2f7f7-38e0-4728-b003-cb3be9930e28" 00:14:24.729 ], 00:14:24.729 "product_name": "Malloc disk", 00:14:24.729 "block_size": 512, 00:14:24.729 "num_blocks": 65536, 00:14:24.729 "uuid": "19f2f7f7-38e0-4728-b003-cb3be9930e28", 00:14:24.729 "assigned_rate_limits": { 00:14:24.729 "rw_ios_per_sec": 0, 00:14:24.729 "rw_mbytes_per_sec": 0, 00:14:24.729 "r_mbytes_per_sec": 0, 00:14:24.729 "w_mbytes_per_sec": 0 00:14:24.729 }, 00:14:24.729 "claimed": true, 00:14:24.729 "claim_type": "exclusive_write", 00:14:24.729 "zoned": 
false, 00:14:24.729 "supported_io_types": { 00:14:24.729 "read": true, 00:14:24.729 "write": true, 00:14:24.729 "unmap": true, 00:14:24.729 "flush": true, 00:14:24.729 "reset": true, 00:14:24.729 "nvme_admin": false, 00:14:24.729 "nvme_io": false, 00:14:24.729 "nvme_io_md": false, 00:14:24.729 "write_zeroes": true, 00:14:24.729 "zcopy": true, 00:14:24.729 "get_zone_info": false, 00:14:24.729 "zone_management": false, 00:14:24.729 "zone_append": false, 00:14:24.729 "compare": false, 00:14:24.729 "compare_and_write": false, 00:14:24.729 "abort": true, 00:14:24.729 "seek_hole": false, 00:14:24.729 "seek_data": false, 00:14:24.729 "copy": true, 00:14:24.729 "nvme_iov_md": false 00:14:24.729 }, 00:14:24.729 "memory_domains": [ 00:14:24.729 { 00:14:24.729 "dma_device_id": "system", 00:14:24.729 "dma_device_type": 1 00:14:24.729 }, 00:14:24.729 { 00:14:24.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.729 "dma_device_type": 2 00:14:24.729 } 00:14:24.729 ], 00:14:24.729 "driver_specific": {} 00:14:24.729 }' 00:14:24.729 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.729 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:24.729 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:24.730 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.730 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:24.730 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:24.730 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:24.730 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.037 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:25.037 18:17:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.037 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.037 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:25.037 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:25.037 [2024-07-24 18:17:33.566392] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.037 [2024-07-24 18:17:33.566410] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.037 [2024-07-24 18:17:33.566445] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.037 [2024-07-24 18:17:33.566621] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.037 [2024-07-24 18:17:33.566634] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x119f300 name Existed_Raid, state offline 00:14:25.037 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2201989 00:14:25.037 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2201989 ']' 00:14:25.037 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2201989 00:14:25.037 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:25.037 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.037 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2201989 00:14:25.296 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:14:25.296 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:25.296 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2201989' 00:14:25.296 killing process with pid 2201989 00:14:25.296 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2201989 00:14:25.296 [2024-07-24 18:17:33.637016] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.296 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2201989 00:14:25.296 [2024-07-24 18:17:33.659017] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.296 18:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:25.296 00:14:25.296 real 0m21.248s 00:14:25.296 user 0m38.800s 00:14:25.296 sys 0m4.101s 00:14:25.296 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.296 18:17:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.296 ************************************ 00:14:25.296 END TEST raid_state_function_test_sb 00:14:25.296 ************************************ 00:14:25.296 18:17:33 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:14:25.296 18:17:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:25.296 18:17:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.296 18:17:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.556 ************************************ 00:14:25.556 START TEST raid_superblock_test 00:14:25.556 ************************************ 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2206137 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2206137 /var/tmp/spdk-raid.sock 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@831 -- # '[' -z 2206137 ']' 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:25.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.556 18:17:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.556 [2024-07-24 18:17:33.968090] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:14:25.556 [2024-07-24 18:17:33.968139] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2206137 ] 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:01.0 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:01.1 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:01.2 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:01.3 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:01.4 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:14:25.556 EAL: Requested device 0000:b3:01.5 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:01.6 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:01.7 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:02.0 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:02.1 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:02.2 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:02.3 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:02.4 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:02.5 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:02.6 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b3:02.7 cannot be used 00:14:25.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.556 EAL: Requested device 0000:b5:01.0 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:01.1 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:01.2 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: 
Requested device 0000:b5:01.3 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:01.4 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:01.5 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:01.6 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:01.7 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:02.0 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:02.1 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:02.2 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:02.3 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:02.4 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:02.5 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:02.6 cannot be used 00:14:25.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:25.557 EAL: Requested device 0000:b5:02.7 cannot be used 00:14:25.557 [2024-07-24 18:17:34.060601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.557 [2024-07-24 18:17:34.134425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.816 [2024-07-24 18:17:34.189031] 
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.816 [2024-07-24 18:17:34.189059] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:26.385 malloc1 00:14:26.385 18:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:26.644 [2024-07-24 18:17:35.069619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:26.644 [2024-07-24 18:17:35.069659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:26.644 [2024-07-24 18:17:35.069673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x136fcb0 00:14:26.644 [2024-07-24 18:17:35.069682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.644 [2024-07-24 18:17:35.070795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.644 [2024-07-24 18:17:35.070818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:26.644 pt1 00:14:26.644 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:26.644 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:26.644 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:26.644 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:26.644 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:26.644 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.644 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.644 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.644 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:26.904 malloc2 00:14:26.904 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:26.904 [2024-07-24 18:17:35.418219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:14:26.904 [2024-07-24 18:17:35.418251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.904 [2024-07-24 18:17:35.418263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13710b0 00:14:26.904 [2024-07-24 18:17:35.418271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.904 [2024-07-24 18:17:35.419270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.904 [2024-07-24 18:17:35.419291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.904 pt2 00:14:26.904 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:26.904 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:26.905 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:26.905 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:26.905 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:26.905 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:26.905 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:26.905 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:26.905 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:27.164 malloc3 00:14:27.164 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:14:27.423 [2024-07-24 18:17:35.762782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:27.423 [2024-07-24 18:17:35.762816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.423 [2024-07-24 18:17:35.762829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1507a80 00:14:27.423 [2024-07-24 18:17:35.762837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.423 [2024-07-24 18:17:35.763858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.423 [2024-07-24 18:17:35.763881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:27.423 pt3 00:14:27.423 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:27.423 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:27.423 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:27.423 [2024-07-24 18:17:35.931237] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:27.423 [2024-07-24 18:17:35.932087] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:27.423 [2024-07-24 18:17:35.932124] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:27.423 [2024-07-24 18:17:35.932226] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x13685e0 00:14:27.423 [2024-07-24 18:17:35.932232] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:27.423 [2024-07-24 18:17:35.932362] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x136f980 00:14:27.423 [2024-07-24 18:17:35.932459] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13685e0 00:14:27.423 [2024-07-24 18:17:35.932466] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13685e0 00:14:27.423 [2024-07-24 18:17:35.932525] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.424 18:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.683 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.683 "name": "raid_bdev1", 00:14:27.683 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:27.683 
"strip_size_kb": 0, 00:14:27.683 "state": "online", 00:14:27.683 "raid_level": "raid1", 00:14:27.683 "superblock": true, 00:14:27.683 "num_base_bdevs": 3, 00:14:27.683 "num_base_bdevs_discovered": 3, 00:14:27.683 "num_base_bdevs_operational": 3, 00:14:27.683 "base_bdevs_list": [ 00:14:27.683 { 00:14:27.683 "name": "pt1", 00:14:27.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:27.683 "is_configured": true, 00:14:27.683 "data_offset": 2048, 00:14:27.683 "data_size": 63488 00:14:27.683 }, 00:14:27.683 { 00:14:27.683 "name": "pt2", 00:14:27.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.683 "is_configured": true, 00:14:27.683 "data_offset": 2048, 00:14:27.683 "data_size": 63488 00:14:27.683 }, 00:14:27.683 { 00:14:27.683 "name": "pt3", 00:14:27.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.683 "is_configured": true, 00:14:27.683 "data_offset": 2048, 00:14:27.683 "data_size": 63488 00:14:27.683 } 00:14:27.683 ] 00:14:27.683 }' 00:14:27.683 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.683 18:17:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:28.252 [2024-07-24 18:17:36.741470] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:28.252 "name": "raid_bdev1", 00:14:28.252 "aliases": [ 00:14:28.252 "ab2181f5-651e-455a-b214-f06f6d77d596" 00:14:28.252 ], 00:14:28.252 "product_name": "Raid Volume", 00:14:28.252 "block_size": 512, 00:14:28.252 "num_blocks": 63488, 00:14:28.252 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:28.252 "assigned_rate_limits": { 00:14:28.252 "rw_ios_per_sec": 0, 00:14:28.252 "rw_mbytes_per_sec": 0, 00:14:28.252 "r_mbytes_per_sec": 0, 00:14:28.252 "w_mbytes_per_sec": 0 00:14:28.252 }, 00:14:28.252 "claimed": false, 00:14:28.252 "zoned": false, 00:14:28.252 "supported_io_types": { 00:14:28.252 "read": true, 00:14:28.252 "write": true, 00:14:28.252 "unmap": false, 00:14:28.252 "flush": false, 00:14:28.252 "reset": true, 00:14:28.252 "nvme_admin": false, 00:14:28.252 "nvme_io": false, 00:14:28.252 "nvme_io_md": false, 00:14:28.252 "write_zeroes": true, 00:14:28.252 "zcopy": false, 00:14:28.252 "get_zone_info": false, 00:14:28.252 "zone_management": false, 00:14:28.252 "zone_append": false, 00:14:28.252 "compare": false, 00:14:28.252 "compare_and_write": false, 00:14:28.252 "abort": false, 00:14:28.252 "seek_hole": false, 00:14:28.252 "seek_data": false, 00:14:28.252 "copy": false, 00:14:28.252 "nvme_iov_md": false 00:14:28.252 }, 00:14:28.252 "memory_domains": [ 00:14:28.252 { 00:14:28.252 "dma_device_id": "system", 00:14:28.252 "dma_device_type": 1 00:14:28.252 }, 00:14:28.252 { 00:14:28.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.252 "dma_device_type": 2 00:14:28.252 }, 00:14:28.252 { 00:14:28.252 "dma_device_id": "system", 00:14:28.252 "dma_device_type": 1 00:14:28.252 }, 00:14:28.252 { 00:14:28.252 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.252 "dma_device_type": 2 00:14:28.252 }, 00:14:28.252 { 00:14:28.252 "dma_device_id": "system", 00:14:28.252 "dma_device_type": 1 00:14:28.252 }, 00:14:28.252 { 00:14:28.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.252 "dma_device_type": 2 00:14:28.252 } 00:14:28.252 ], 00:14:28.252 "driver_specific": { 00:14:28.252 "raid": { 00:14:28.252 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:28.252 "strip_size_kb": 0, 00:14:28.252 "state": "online", 00:14:28.252 "raid_level": "raid1", 00:14:28.252 "superblock": true, 00:14:28.252 "num_base_bdevs": 3, 00:14:28.252 "num_base_bdevs_discovered": 3, 00:14:28.252 "num_base_bdevs_operational": 3, 00:14:28.252 "base_bdevs_list": [ 00:14:28.252 { 00:14:28.252 "name": "pt1", 00:14:28.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.252 "is_configured": true, 00:14:28.252 "data_offset": 2048, 00:14:28.252 "data_size": 63488 00:14:28.252 }, 00:14:28.252 { 00:14:28.252 "name": "pt2", 00:14:28.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.252 "is_configured": true, 00:14:28.252 "data_offset": 2048, 00:14:28.252 "data_size": 63488 00:14:28.252 }, 00:14:28.252 { 00:14:28.252 "name": "pt3", 00:14:28.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.252 "is_configured": true, 00:14:28.252 "data_offset": 2048, 00:14:28.252 "data_size": 63488 00:14:28.252 } 00:14:28.252 ] 00:14:28.252 } 00:14:28.252 } 00:14:28.252 }' 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:28.252 pt2 00:14:28.252 pt3' 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:28.252 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:28.512 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:28.512 "name": "pt1", 00:14:28.512 "aliases": [ 00:14:28.512 "00000000-0000-0000-0000-000000000001" 00:14:28.512 ], 00:14:28.512 "product_name": "passthru", 00:14:28.512 "block_size": 512, 00:14:28.512 "num_blocks": 65536, 00:14:28.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.512 "assigned_rate_limits": { 00:14:28.512 "rw_ios_per_sec": 0, 00:14:28.512 "rw_mbytes_per_sec": 0, 00:14:28.512 "r_mbytes_per_sec": 0, 00:14:28.512 "w_mbytes_per_sec": 0 00:14:28.512 }, 00:14:28.512 "claimed": true, 00:14:28.512 "claim_type": "exclusive_write", 00:14:28.512 "zoned": false, 00:14:28.512 "supported_io_types": { 00:14:28.512 "read": true, 00:14:28.512 "write": true, 00:14:28.512 "unmap": true, 00:14:28.512 "flush": true, 00:14:28.512 "reset": true, 00:14:28.512 "nvme_admin": false, 00:14:28.512 "nvme_io": false, 00:14:28.512 "nvme_io_md": false, 00:14:28.512 "write_zeroes": true, 00:14:28.512 "zcopy": true, 00:14:28.512 "get_zone_info": false, 00:14:28.512 "zone_management": false, 00:14:28.512 "zone_append": false, 00:14:28.512 "compare": false, 00:14:28.512 "compare_and_write": false, 00:14:28.512 "abort": true, 00:14:28.512 "seek_hole": false, 00:14:28.512 "seek_data": false, 00:14:28.512 "copy": true, 00:14:28.512 "nvme_iov_md": false 00:14:28.512 }, 00:14:28.512 "memory_domains": [ 00:14:28.512 { 00:14:28.512 "dma_device_id": "system", 00:14:28.512 "dma_device_type": 1 00:14:28.512 }, 00:14:28.512 { 00:14:28.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.512 "dma_device_type": 2 00:14:28.512 } 00:14:28.512 ], 00:14:28.512 "driver_specific": { 00:14:28.512 "passthru": { 00:14:28.512 "name": "pt1", 00:14:28.512 "base_bdev_name": "malloc1" 
00:14:28.512 } 00:14:28.512 } 00:14:28.512 }' 00:14:28.512 18:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.512 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.512 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:28.512 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.512 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.771 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:28.772 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.772 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.772 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:28.772 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.772 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.772 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:28.772 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:28.772 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:28.772 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:29.031 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:29.031 "name": "pt2", 00:14:29.031 "aliases": [ 00:14:29.031 "00000000-0000-0000-0000-000000000002" 00:14:29.031 ], 00:14:29.031 "product_name": "passthru", 00:14:29.031 "block_size": 512, 00:14:29.031 "num_blocks": 65536, 00:14:29.031 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:14:29.031 "assigned_rate_limits": { 00:14:29.031 "rw_ios_per_sec": 0, 00:14:29.031 "rw_mbytes_per_sec": 0, 00:14:29.031 "r_mbytes_per_sec": 0, 00:14:29.031 "w_mbytes_per_sec": 0 00:14:29.031 }, 00:14:29.031 "claimed": true, 00:14:29.031 "claim_type": "exclusive_write", 00:14:29.031 "zoned": false, 00:14:29.031 "supported_io_types": { 00:14:29.031 "read": true, 00:14:29.031 "write": true, 00:14:29.031 "unmap": true, 00:14:29.031 "flush": true, 00:14:29.031 "reset": true, 00:14:29.031 "nvme_admin": false, 00:14:29.031 "nvme_io": false, 00:14:29.031 "nvme_io_md": false, 00:14:29.031 "write_zeroes": true, 00:14:29.031 "zcopy": true, 00:14:29.031 "get_zone_info": false, 00:14:29.031 "zone_management": false, 00:14:29.031 "zone_append": false, 00:14:29.031 "compare": false, 00:14:29.031 "compare_and_write": false, 00:14:29.031 "abort": true, 00:14:29.031 "seek_hole": false, 00:14:29.031 "seek_data": false, 00:14:29.031 "copy": true, 00:14:29.031 "nvme_iov_md": false 00:14:29.031 }, 00:14:29.031 "memory_domains": [ 00:14:29.031 { 00:14:29.031 "dma_device_id": "system", 00:14:29.031 "dma_device_type": 1 00:14:29.031 }, 00:14:29.031 { 00:14:29.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.031 "dma_device_type": 2 00:14:29.031 } 00:14:29.031 ], 00:14:29.031 "driver_specific": { 00:14:29.031 "passthru": { 00:14:29.031 "name": "pt2", 00:14:29.031 "base_bdev_name": "malloc2" 00:14:29.031 } 00:14:29.031 } 00:14:29.031 }' 00:14:29.031 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.031 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.031 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:29.031 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.031 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.031 18:17:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:29.031 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.031 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.291 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:29.291 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.291 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.291 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:29.291 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:29.291 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:29.291 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:29.550 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:29.551 "name": "pt3", 00:14:29.551 "aliases": [ 00:14:29.551 "00000000-0000-0000-0000-000000000003" 00:14:29.551 ], 00:14:29.551 "product_name": "passthru", 00:14:29.551 "block_size": 512, 00:14:29.551 "num_blocks": 65536, 00:14:29.551 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.551 "assigned_rate_limits": { 00:14:29.551 "rw_ios_per_sec": 0, 00:14:29.551 "rw_mbytes_per_sec": 0, 00:14:29.551 "r_mbytes_per_sec": 0, 00:14:29.551 "w_mbytes_per_sec": 0 00:14:29.551 }, 00:14:29.551 "claimed": true, 00:14:29.551 "claim_type": "exclusive_write", 00:14:29.551 "zoned": false, 00:14:29.551 "supported_io_types": { 00:14:29.551 "read": true, 00:14:29.551 "write": true, 00:14:29.551 "unmap": true, 00:14:29.551 "flush": true, 00:14:29.551 "reset": true, 00:14:29.551 "nvme_admin": false, 00:14:29.551 
"nvme_io": false, 00:14:29.551 "nvme_io_md": false, 00:14:29.551 "write_zeroes": true, 00:14:29.551 "zcopy": true, 00:14:29.551 "get_zone_info": false, 00:14:29.551 "zone_management": false, 00:14:29.551 "zone_append": false, 00:14:29.551 "compare": false, 00:14:29.551 "compare_and_write": false, 00:14:29.551 "abort": true, 00:14:29.551 "seek_hole": false, 00:14:29.551 "seek_data": false, 00:14:29.551 "copy": true, 00:14:29.551 "nvme_iov_md": false 00:14:29.551 }, 00:14:29.551 "memory_domains": [ 00:14:29.551 { 00:14:29.551 "dma_device_id": "system", 00:14:29.551 "dma_device_type": 1 00:14:29.551 }, 00:14:29.551 { 00:14:29.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.551 "dma_device_type": 2 00:14:29.551 } 00:14:29.551 ], 00:14:29.551 "driver_specific": { 00:14:29.551 "passthru": { 00:14:29.551 "name": "pt3", 00:14:29.551 "base_bdev_name": "malloc3" 00:14:29.551 } 00:14:29.551 } 00:14:29.551 }' 00:14:29.551 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.551 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.551 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:29.551 18:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.551 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.551 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:29.551 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.551 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.551 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:29.551 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.810 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:14:29.810 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:29.810 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:29.810 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:29.810 [2024-07-24 18:17:38.357618] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.810 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ab2181f5-651e-455a-b214-f06f6d77d596 00:14:29.810 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z ab2181f5-651e-455a-b214-f06f6d77d596 ']' 00:14:29.810 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:30.070 [2024-07-24 18:17:38.525882] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.070 [2024-07-24 18:17:38.525893] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.070 [2024-07-24 18:17:38.525927] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.070 [2024-07-24 18:17:38.525973] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.070 [2024-07-24 18:17:38.525980] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13685e0 name raid_bdev1, state offline 00:14:30.070 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:30.070 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.329 18:17:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:30.329 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:30.329 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:30.329 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:30.329 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:30.329 18:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:30.588 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:30.588 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:30.848 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:31.107 [2024-07-24 18:17:39.536465] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:31.107 [2024-07-24 18:17:39.537448] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:31.107 [2024-07-24 18:17:39.537478] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:31.107 [2024-07-24 18:17:39.537511] 
bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:31.107 [2024-07-24 18:17:39.537540] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:31.107 [2024-07-24 18:17:39.537555] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:31.107 [2024-07-24 18:17:39.537567] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.107 [2024-07-24 18:17:39.537573] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1513730 name raid_bdev1, state configuring 00:14:31.107 request: 00:14:31.107 { 00:14:31.107 "name": "raid_bdev1", 00:14:31.107 "raid_level": "raid1", 00:14:31.107 "base_bdevs": [ 00:14:31.107 "malloc1", 00:14:31.107 "malloc2", 00:14:31.107 "malloc3" 00:14:31.107 ], 00:14:31.107 "superblock": false, 00:14:31.107 "method": "bdev_raid_create", 00:14:31.107 "req_id": 1 00:14:31.107 } 00:14:31.107 Got JSON-RPC error response 00:14:31.107 response: 00:14:31.107 { 00:14:31.107 "code": -17, 00:14:31.107 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:31.107 } 00:14:31.107 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:31.107 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:31.107 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:31.107 18:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:31.107 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.107 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:31.367 18:17:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:31.367 [2024-07-24 18:17:39.865281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:31.367 [2024-07-24 18:17:39.865310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.367 [2024-07-24 18:17:39.865325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x136fee0 00:14:31.367 [2024-07-24 18:17:39.865333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.367 [2024-07-24 18:17:39.866452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.367 [2024-07-24 18:17:39.866475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:31.367 [2024-07-24 18:17:39.866520] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:31.367 [2024-07-24 18:17:39.866538] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:31.367 pt1 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:31.367 
18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.367 18:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.626 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:31.626 "name": "raid_bdev1", 00:14:31.626 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:31.626 "strip_size_kb": 0, 00:14:31.626 "state": "configuring", 00:14:31.626 "raid_level": "raid1", 00:14:31.626 "superblock": true, 00:14:31.626 "num_base_bdevs": 3, 00:14:31.626 "num_base_bdevs_discovered": 1, 00:14:31.626 "num_base_bdevs_operational": 3, 00:14:31.626 "base_bdevs_list": [ 00:14:31.626 { 00:14:31.626 "name": "pt1", 00:14:31.626 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.626 "is_configured": true, 00:14:31.626 "data_offset": 2048, 00:14:31.626 "data_size": 63488 00:14:31.626 }, 00:14:31.626 { 00:14:31.626 "name": null, 00:14:31.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.626 "is_configured": false, 00:14:31.626 "data_offset": 2048, 00:14:31.626 "data_size": 63488 00:14:31.626 }, 00:14:31.626 { 00:14:31.626 "name": null, 00:14:31.626 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.626 "is_configured": false, 00:14:31.626 "data_offset": 2048, 00:14:31.626 "data_size": 63488 
00:14:31.626 } 00:14:31.626 ] 00:14:31.626 }' 00:14:31.626 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:31.626 18:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.195 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:14:32.195 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:32.195 [2024-07-24 18:17:40.663366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:32.195 [2024-07-24 18:17:40.663401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.195 [2024-07-24 18:17:40.663413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1367150 00:14:32.195 [2024-07-24 18:17:40.663422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.195 [2024-07-24 18:17:40.663696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.195 [2024-07-24 18:17:40.663710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:32.195 [2024-07-24 18:17:40.663754] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:32.195 [2024-07-24 18:17:40.663768] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:32.195 pt2 00:14:32.195 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:32.454 [2024-07-24 18:17:40.819792] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:32.454 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 
configuring raid1 0 3 00:14:32.454 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:32.454 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:32.454 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:32.454 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:32.454 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:32.454 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.454 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.455 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.455 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.455 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.455 18:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.455 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:32.455 "name": "raid_bdev1", 00:14:32.455 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:32.455 "strip_size_kb": 0, 00:14:32.455 "state": "configuring", 00:14:32.455 "raid_level": "raid1", 00:14:32.455 "superblock": true, 00:14:32.455 "num_base_bdevs": 3, 00:14:32.455 "num_base_bdevs_discovered": 1, 00:14:32.455 "num_base_bdevs_operational": 3, 00:14:32.455 "base_bdevs_list": [ 00:14:32.455 { 00:14:32.455 "name": "pt1", 00:14:32.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:32.455 "is_configured": true, 00:14:32.455 "data_offset": 2048, 
00:14:32.455 "data_size": 63488 00:14:32.455 }, 00:14:32.455 { 00:14:32.455 "name": null, 00:14:32.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.455 "is_configured": false, 00:14:32.455 "data_offset": 2048, 00:14:32.455 "data_size": 63488 00:14:32.455 }, 00:14:32.455 { 00:14:32.455 "name": null, 00:14:32.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.455 "is_configured": false, 00:14:32.455 "data_offset": 2048, 00:14:32.455 "data_size": 63488 00:14:32.455 } 00:14:32.455 ] 00:14:32.455 }' 00:14:32.455 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.455 18:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.023 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:33.023 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:33.023 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:33.283 [2024-07-24 18:17:41.657963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:33.283 [2024-07-24 18:17:41.658002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.283 [2024-07-24 18:17:41.658016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1370150 00:14:33.283 [2024-07-24 18:17:41.658024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.283 [2024-07-24 18:17:41.658269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.283 [2024-07-24 18:17:41.658282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:33.283 [2024-07-24 18:17:41.658325] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:14:33.283 [2024-07-24 18:17:41.658338] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:33.283 pt2 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:33.283 [2024-07-24 18:17:41.834416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:33.283 [2024-07-24 18:17:41.834440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.283 [2024-07-24 18:17:41.834451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1366e00 00:14:33.283 [2024-07-24 18:17:41.834458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.283 [2024-07-24 18:17:41.834664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.283 [2024-07-24 18:17:41.834677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:33.283 [2024-07-24 18:17:41.834713] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:33.283 [2024-07-24 18:17:41.834726] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:33.283 [2024-07-24 18:17:41.834795] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x150a340 00:14:33.283 [2024-07-24 18:17:41.834802] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:33.283 [2024-07-24 18:17:41.834911] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1369d50 00:14:33.283 [2024-07-24 18:17:41.834997] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x150a340 00:14:33.283 [2024-07-24 18:17:41.835004] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x150a340 00:14:33.283 [2024-07-24 18:17:41.835068] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.283 pt3 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.283 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.284 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.284 18:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:33.543 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.543 "name": "raid_bdev1", 00:14:33.543 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:33.543 "strip_size_kb": 0, 00:14:33.543 "state": "online", 00:14:33.543 "raid_level": "raid1", 00:14:33.543 "superblock": true, 00:14:33.543 "num_base_bdevs": 3, 00:14:33.543 "num_base_bdevs_discovered": 3, 00:14:33.543 "num_base_bdevs_operational": 3, 00:14:33.543 "base_bdevs_list": [ 00:14:33.543 { 00:14:33.543 "name": "pt1", 00:14:33.543 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:33.543 "is_configured": true, 00:14:33.543 "data_offset": 2048, 00:14:33.543 "data_size": 63488 00:14:33.543 }, 00:14:33.543 { 00:14:33.543 "name": "pt2", 00:14:33.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.543 "is_configured": true, 00:14:33.543 "data_offset": 2048, 00:14:33.543 "data_size": 63488 00:14:33.543 }, 00:14:33.543 { 00:14:33.543 "name": "pt3", 00:14:33.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.543 "is_configured": true, 00:14:33.543 "data_offset": 2048, 00:14:33.543 "data_size": 63488 00:14:33.543 } 00:14:33.543 ] 00:14:33.543 }' 00:14:33.543 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.543 18:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.112 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:34.113 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:34.113 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:34.113 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:34.113 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:34.113 18:17:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:14:34.113 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:34.113 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:34.113 [2024-07-24 18:17:42.700827] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.372 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:34.372 "name": "raid_bdev1", 00:14:34.372 "aliases": [ 00:14:34.372 "ab2181f5-651e-455a-b214-f06f6d77d596" 00:14:34.372 ], 00:14:34.372 "product_name": "Raid Volume", 00:14:34.372 "block_size": 512, 00:14:34.372 "num_blocks": 63488, 00:14:34.372 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:34.372 "assigned_rate_limits": { 00:14:34.372 "rw_ios_per_sec": 0, 00:14:34.372 "rw_mbytes_per_sec": 0, 00:14:34.372 "r_mbytes_per_sec": 0, 00:14:34.372 "w_mbytes_per_sec": 0 00:14:34.372 }, 00:14:34.372 "claimed": false, 00:14:34.372 "zoned": false, 00:14:34.372 "supported_io_types": { 00:14:34.372 "read": true, 00:14:34.372 "write": true, 00:14:34.372 "unmap": false, 00:14:34.372 "flush": false, 00:14:34.372 "reset": true, 00:14:34.372 "nvme_admin": false, 00:14:34.372 "nvme_io": false, 00:14:34.372 "nvme_io_md": false, 00:14:34.372 "write_zeroes": true, 00:14:34.372 "zcopy": false, 00:14:34.372 "get_zone_info": false, 00:14:34.372 "zone_management": false, 00:14:34.372 "zone_append": false, 00:14:34.372 "compare": false, 00:14:34.372 "compare_and_write": false, 00:14:34.372 "abort": false, 00:14:34.372 "seek_hole": false, 00:14:34.372 "seek_data": false, 00:14:34.372 "copy": false, 00:14:34.372 "nvme_iov_md": false 00:14:34.372 }, 00:14:34.372 "memory_domains": [ 00:14:34.372 { 00:14:34.372 "dma_device_id": "system", 00:14:34.372 "dma_device_type": 1 00:14:34.372 }, 00:14:34.372 { 00:14:34.372 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:34.372 "dma_device_type": 2 00:14:34.372 }, 00:14:34.372 { 00:14:34.372 "dma_device_id": "system", 00:14:34.372 "dma_device_type": 1 00:14:34.372 }, 00:14:34.372 { 00:14:34.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.372 "dma_device_type": 2 00:14:34.372 }, 00:14:34.372 { 00:14:34.372 "dma_device_id": "system", 00:14:34.372 "dma_device_type": 1 00:14:34.372 }, 00:14:34.372 { 00:14:34.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.372 "dma_device_type": 2 00:14:34.372 } 00:14:34.372 ], 00:14:34.372 "driver_specific": { 00:14:34.372 "raid": { 00:14:34.372 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:34.372 "strip_size_kb": 0, 00:14:34.372 "state": "online", 00:14:34.372 "raid_level": "raid1", 00:14:34.372 "superblock": true, 00:14:34.372 "num_base_bdevs": 3, 00:14:34.372 "num_base_bdevs_discovered": 3, 00:14:34.372 "num_base_bdevs_operational": 3, 00:14:34.372 "base_bdevs_list": [ 00:14:34.372 { 00:14:34.372 "name": "pt1", 00:14:34.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:34.372 "is_configured": true, 00:14:34.373 "data_offset": 2048, 00:14:34.373 "data_size": 63488 00:14:34.373 }, 00:14:34.373 { 00:14:34.373 "name": "pt2", 00:14:34.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.373 "is_configured": true, 00:14:34.373 "data_offset": 2048, 00:14:34.373 "data_size": 63488 00:14:34.373 }, 00:14:34.373 { 00:14:34.373 "name": "pt3", 00:14:34.373 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:34.373 "is_configured": true, 00:14:34.373 "data_offset": 2048, 00:14:34.373 "data_size": 63488 00:14:34.373 } 00:14:34.373 ] 00:14:34.373 } 00:14:34.373 } 00:14:34.373 }' 00:14:34.373 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:34.373 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:34.373 pt2 00:14:34.373 pt3' 00:14:34.373 
18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:34.373 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:34.373 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:34.373 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:34.373 "name": "pt1", 00:14:34.373 "aliases": [ 00:14:34.373 "00000000-0000-0000-0000-000000000001" 00:14:34.373 ], 00:14:34.373 "product_name": "passthru", 00:14:34.373 "block_size": 512, 00:14:34.373 "num_blocks": 65536, 00:14:34.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:34.373 "assigned_rate_limits": { 00:14:34.373 "rw_ios_per_sec": 0, 00:14:34.373 "rw_mbytes_per_sec": 0, 00:14:34.373 "r_mbytes_per_sec": 0, 00:14:34.373 "w_mbytes_per_sec": 0 00:14:34.373 }, 00:14:34.373 "claimed": true, 00:14:34.373 "claim_type": "exclusive_write", 00:14:34.373 "zoned": false, 00:14:34.373 "supported_io_types": { 00:14:34.373 "read": true, 00:14:34.373 "write": true, 00:14:34.373 "unmap": true, 00:14:34.373 "flush": true, 00:14:34.373 "reset": true, 00:14:34.373 "nvme_admin": false, 00:14:34.373 "nvme_io": false, 00:14:34.373 "nvme_io_md": false, 00:14:34.373 "write_zeroes": true, 00:14:34.373 "zcopy": true, 00:14:34.373 "get_zone_info": false, 00:14:34.373 "zone_management": false, 00:14:34.373 "zone_append": false, 00:14:34.373 "compare": false, 00:14:34.373 "compare_and_write": false, 00:14:34.373 "abort": true, 00:14:34.373 "seek_hole": false, 00:14:34.373 "seek_data": false, 00:14:34.373 "copy": true, 00:14:34.373 "nvme_iov_md": false 00:14:34.373 }, 00:14:34.373 "memory_domains": [ 00:14:34.373 { 00:14:34.373 "dma_device_id": "system", 00:14:34.373 "dma_device_type": 1 00:14:34.373 }, 00:14:34.373 { 00:14:34.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.373 
"dma_device_type": 2 00:14:34.373 } 00:14:34.373 ], 00:14:34.373 "driver_specific": { 00:14:34.373 "passthru": { 00:14:34.373 "name": "pt1", 00:14:34.373 "base_bdev_name": "malloc1" 00:14:34.373 } 00:14:34.373 } 00:14:34.373 }' 00:14:34.373 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.373 18:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.632 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:34.632 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.632 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.632 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:34.632 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.632 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.632 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:34.632 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.632 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.899 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:34.899 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:34.899 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:34.899 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:34.899 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:34.899 "name": "pt2", 00:14:34.899 "aliases": [ 00:14:34.899 
"00000000-0000-0000-0000-000000000002" 00:14:34.899 ], 00:14:34.899 "product_name": "passthru", 00:14:34.899 "block_size": 512, 00:14:34.899 "num_blocks": 65536, 00:14:34.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.899 "assigned_rate_limits": { 00:14:34.899 "rw_ios_per_sec": 0, 00:14:34.899 "rw_mbytes_per_sec": 0, 00:14:34.899 "r_mbytes_per_sec": 0, 00:14:34.899 "w_mbytes_per_sec": 0 00:14:34.899 }, 00:14:34.899 "claimed": true, 00:14:34.899 "claim_type": "exclusive_write", 00:14:34.899 "zoned": false, 00:14:34.899 "supported_io_types": { 00:14:34.899 "read": true, 00:14:34.899 "write": true, 00:14:34.899 "unmap": true, 00:14:34.899 "flush": true, 00:14:34.899 "reset": true, 00:14:34.899 "nvme_admin": false, 00:14:34.899 "nvme_io": false, 00:14:34.899 "nvme_io_md": false, 00:14:34.899 "write_zeroes": true, 00:14:34.899 "zcopy": true, 00:14:34.899 "get_zone_info": false, 00:14:34.899 "zone_management": false, 00:14:34.899 "zone_append": false, 00:14:34.899 "compare": false, 00:14:34.899 "compare_and_write": false, 00:14:34.899 "abort": true, 00:14:34.899 "seek_hole": false, 00:14:34.899 "seek_data": false, 00:14:34.899 "copy": true, 00:14:34.899 "nvme_iov_md": false 00:14:34.899 }, 00:14:34.899 "memory_domains": [ 00:14:34.899 { 00:14:34.899 "dma_device_id": "system", 00:14:34.899 "dma_device_type": 1 00:14:34.899 }, 00:14:34.899 { 00:14:34.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.899 "dma_device_type": 2 00:14:34.899 } 00:14:34.899 ], 00:14:34.899 "driver_specific": { 00:14:34.899 "passthru": { 00:14:34.899 "name": "pt2", 00:14:34.899 "base_bdev_name": "malloc2" 00:14:34.899 } 00:14:34.899 } 00:14:34.899 }' 00:14:34.899 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.899 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:35.162 18:17:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:35.162 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:35.421 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:35.421 "name": "pt3", 00:14:35.421 "aliases": [ 00:14:35.421 "00000000-0000-0000-0000-000000000003" 00:14:35.421 ], 00:14:35.421 "product_name": "passthru", 00:14:35.421 "block_size": 512, 00:14:35.421 "num_blocks": 65536, 00:14:35.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.421 "assigned_rate_limits": { 00:14:35.421 "rw_ios_per_sec": 0, 00:14:35.421 "rw_mbytes_per_sec": 0, 00:14:35.421 "r_mbytes_per_sec": 0, 00:14:35.421 "w_mbytes_per_sec": 0 00:14:35.421 }, 00:14:35.421 "claimed": true, 00:14:35.421 "claim_type": "exclusive_write", 00:14:35.421 "zoned": false, 00:14:35.421 "supported_io_types": { 
00:14:35.421 "read": true, 00:14:35.421 "write": true, 00:14:35.421 "unmap": true, 00:14:35.421 "flush": true, 00:14:35.421 "reset": true, 00:14:35.421 "nvme_admin": false, 00:14:35.421 "nvme_io": false, 00:14:35.421 "nvme_io_md": false, 00:14:35.421 "write_zeroes": true, 00:14:35.421 "zcopy": true, 00:14:35.421 "get_zone_info": false, 00:14:35.421 "zone_management": false, 00:14:35.421 "zone_append": false, 00:14:35.421 "compare": false, 00:14:35.421 "compare_and_write": false, 00:14:35.421 "abort": true, 00:14:35.421 "seek_hole": false, 00:14:35.421 "seek_data": false, 00:14:35.421 "copy": true, 00:14:35.421 "nvme_iov_md": false 00:14:35.421 }, 00:14:35.421 "memory_domains": [ 00:14:35.421 { 00:14:35.421 "dma_device_id": "system", 00:14:35.421 "dma_device_type": 1 00:14:35.421 }, 00:14:35.421 { 00:14:35.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.421 "dma_device_type": 2 00:14:35.421 } 00:14:35.421 ], 00:14:35.421 "driver_specific": { 00:14:35.421 "passthru": { 00:14:35.421 "name": "pt3", 00:14:35.421 "base_bdev_name": "malloc3" 00:14:35.421 } 00:14:35.421 } 00:14:35.421 }' 00:14:35.421 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.421 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.421 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:35.421 18:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.680 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.680 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:35.680 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.680 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.680 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:14:35.680 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.680 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.680 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:35.680 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:35.680 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:35.940 [2024-07-24 18:17:44.393185] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.940 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' ab2181f5-651e-455a-b214-f06f6d77d596 '!=' ab2181f5-651e-455a-b214-f06f6d77d596 ']' 00:14:35.940 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:14:35.940 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:35.940 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:35.940 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:36.199 [2024-07-24 18:17:44.565481] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.199 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.199 "name": "raid_bdev1", 00:14:36.199 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:36.199 "strip_size_kb": 0, 00:14:36.199 "state": "online", 00:14:36.199 "raid_level": "raid1", 00:14:36.199 "superblock": true, 00:14:36.199 "num_base_bdevs": 3, 00:14:36.199 "num_base_bdevs_discovered": 2, 00:14:36.199 "num_base_bdevs_operational": 2, 00:14:36.199 "base_bdevs_list": [ 00:14:36.199 { 00:14:36.199 "name": null, 00:14:36.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.199 "is_configured": false, 00:14:36.199 "data_offset": 2048, 00:14:36.199 "data_size": 63488 00:14:36.199 }, 00:14:36.199 { 00:14:36.199 "name": "pt2", 00:14:36.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.199 "is_configured": true, 00:14:36.199 "data_offset": 2048, 00:14:36.199 "data_size": 63488 00:14:36.199 }, 00:14:36.199 { 00:14:36.199 "name": "pt3", 00:14:36.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.199 "is_configured": true, 00:14:36.199 
"data_offset": 2048, 00:14:36.199 "data_size": 63488 00:14:36.199 } 00:14:36.199 ] 00:14:36.199 }' 00:14:36.200 18:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.200 18:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.767 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:37.027 [2024-07-24 18:17:45.387581] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.027 [2024-07-24 18:17:45.387600] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.027 [2024-07-24 18:17:45.387640] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.027 [2024-07-24 18:17:45.387676] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.027 [2024-07-24 18:17:45.387683] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x150a340 name raid_bdev1, state offline 00:14:37.027 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.027 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:14:37.027 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:14:37.027 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:14:37.027 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:14:37.027 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:37.027 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:37.286 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:37.287 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:37.287 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:37.546 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:37.546 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:37.546 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:14:37.546 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:37.546 18:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.546 [2024-07-24 18:17:46.057292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.546 [2024-07-24 18:17:46.057326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.546 [2024-07-24 18:17:46.057338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1367a50 00:14:37.546 [2024-07-24 18:17:46.057346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.546 [2024-07-24 18:17:46.058487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.546 [2024-07-24 18:17:46.058510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.546 [2024-07-24 18:17:46.058556] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:37.546 [2024-07-24 18:17:46.058574] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.546 pt2 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.546 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.806 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.806 "name": "raid_bdev1", 00:14:37.806 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:37.806 "strip_size_kb": 0, 00:14:37.806 "state": "configuring", 00:14:37.806 "raid_level": "raid1", 00:14:37.806 "superblock": true, 00:14:37.806 "num_base_bdevs": 3, 00:14:37.806 "num_base_bdevs_discovered": 1, 00:14:37.806 "num_base_bdevs_operational": 2, 
00:14:37.806 "base_bdevs_list": [ 00:14:37.806 { 00:14:37.806 "name": null, 00:14:37.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.806 "is_configured": false, 00:14:37.806 "data_offset": 2048, 00:14:37.806 "data_size": 63488 00:14:37.806 }, 00:14:37.806 { 00:14:37.806 "name": "pt2", 00:14:37.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.806 "is_configured": true, 00:14:37.806 "data_offset": 2048, 00:14:37.806 "data_size": 63488 00:14:37.806 }, 00:14:37.806 { 00:14:37.806 "name": null, 00:14:37.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.806 "is_configured": false, 00:14:37.806 "data_offset": 2048, 00:14:37.806 "data_size": 63488 00:14:37.806 } 00:14:37.806 ] 00:14:37.806 }' 00:14:37.806 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.806 18:17:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:38.375 [2024-07-24 18:17:46.891444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:38.375 [2024-07-24 18:17:46.891476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.375 [2024-07-24 18:17:46.891489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1366600 00:14:38.375 [2024-07-24 18:17:46.891497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.375 [2024-07-24 18:17:46.891747] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.375 [2024-07-24 18:17:46.891760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:38.375 [2024-07-24 18:17:46.891801] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:38.375 [2024-07-24 18:17:46.891814] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:38.375 [2024-07-24 18:17:46.891881] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1508400 00:14:38.375 [2024-07-24 18:17:46.891888] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:38.375 [2024-07-24 18:17:46.891994] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1508e10 00:14:38.375 [2024-07-24 18:17:46.892078] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1508400 00:14:38.375 [2024-07-24 18:17:46.892085] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1508400 00:14:38.375 [2024-07-24 18:17:46.892149] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.375 pt3 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.375 18:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.634 18:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:38.634 "name": "raid_bdev1", 00:14:38.634 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:38.634 "strip_size_kb": 0, 00:14:38.634 "state": "online", 00:14:38.634 "raid_level": "raid1", 00:14:38.634 "superblock": true, 00:14:38.634 "num_base_bdevs": 3, 00:14:38.634 "num_base_bdevs_discovered": 2, 00:14:38.634 "num_base_bdevs_operational": 2, 00:14:38.634 "base_bdevs_list": [ 00:14:38.634 { 00:14:38.634 "name": null, 00:14:38.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.634 "is_configured": false, 00:14:38.634 "data_offset": 2048, 00:14:38.634 "data_size": 63488 00:14:38.634 }, 00:14:38.634 { 00:14:38.634 "name": "pt2", 00:14:38.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.634 "is_configured": true, 00:14:38.634 "data_offset": 2048, 00:14:38.634 "data_size": 63488 00:14:38.634 }, 00:14:38.634 { 00:14:38.634 "name": "pt3", 00:14:38.634 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.634 "is_configured": true, 00:14:38.634 "data_offset": 2048, 00:14:38.634 "data_size": 63488 00:14:38.634 } 00:14:38.634 ] 00:14:38.634 }' 00:14:38.634 18:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.634 18:17:47 bdev_raid.raid_superblock_test -- 
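The `verify_raid_bdev_state` helper traced here (bdev_raid.sh@116-128) fetches all raid bdevs over RPC, selects the `raid_bdev1` entry with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares state, level, and base-bdev counts against the expected values. A sketch of that select-and-compare step in Python, using an abridged copy of the JSON this log prints (the checks shown are a subset of what the shell helper verifies):

```python
import json

# Abridged bdev_raid_get_bdevs output from this log run
bdevs = json.loads("""
[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
   "num_base_bdevs": 3, "num_base_bdevs_discovered": 2,
   "num_base_bdevs_operational": 2}
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# Compare against the expected state passed to verify_raid_bdev_state
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_operational"] == 2
```

After pt3 is re-added, the raid1 array comes back online with two of three base bdevs operational, which is exactly the state the JSON dump below records.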
common/autotest_common.sh@10 -- # set +x 00:14:39.203 18:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:39.203 [2024-07-24 18:17:47.717581] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.203 [2024-07-24 18:17:47.717598] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.203 [2024-07-24 18:17:47.717635] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.203 [2024-07-24 18:17:47.717669] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.203 [2024-07-24 18:17:47.717675] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1508400 name raid_bdev1, state offline 00:14:39.203 18:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.203 18:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:14:39.462 18:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:14:39.462 18:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:14:39.462 18:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:14:39.462 18:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:14:39.462 18:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:39.722 [2024-07-24 18:17:48.230892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:39.722 [2024-07-24 18:17:48.230923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.722 [2024-07-24 18:17:48.230933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1366600 00:14:39.722 [2024-07-24 18:17:48.230941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.722 [2024-07-24 18:17:48.232069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.722 [2024-07-24 18:17:48.232090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:39.722 [2024-07-24 18:17:48.232134] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:39.722 [2024-07-24 18:17:48.232151] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:39.722 [2024-07-24 18:17:48.232215] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:39.722 [2024-07-24 18:17:48.232223] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.722 [2024-07-24 18:17:48.232232] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1508680 name raid_bdev1, state configuring 00:14:39.722 [2024-07-24 18:17:48.232247] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:39.722 pt1 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.722 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.981 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.981 "name": "raid_bdev1", 00:14:39.981 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:39.981 "strip_size_kb": 0, 00:14:39.981 "state": "configuring", 00:14:39.981 "raid_level": "raid1", 00:14:39.981 "superblock": true, 00:14:39.981 "num_base_bdevs": 3, 00:14:39.981 "num_base_bdevs_discovered": 1, 00:14:39.981 "num_base_bdevs_operational": 2, 00:14:39.981 "base_bdevs_list": [ 00:14:39.981 { 00:14:39.981 "name": null, 00:14:39.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.981 "is_configured": false, 00:14:39.981 "data_offset": 2048, 00:14:39.981 "data_size": 63488 00:14:39.981 }, 00:14:39.981 { 00:14:39.981 "name": "pt2", 00:14:39.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.981 "is_configured": true, 00:14:39.981 
"data_offset": 2048, 00:14:39.981 "data_size": 63488 00:14:39.981 }, 00:14:39.981 { 00:14:39.981 "name": null, 00:14:39.981 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.981 "is_configured": false, 00:14:39.981 "data_offset": 2048, 00:14:39.981 "data_size": 63488 00:14:39.981 } 00:14:39.981 ] 00:14:39.981 }' 00:14:39.981 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.981 18:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.578 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:14:40.578 18:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:40.578 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:14:40.579 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.837 [2024-07-24 18:17:49.229473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:40.837 [2024-07-24 18:17:49.229511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.837 [2024-07-24 18:17:49.229524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1369800 00:14:40.837 [2024-07-24 18:17:49.229532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.837 [2024-07-24 18:17:49.229794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.838 [2024-07-24 18:17:49.229808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.838 [2024-07-24 18:17:49.229853] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:40.838 [2024-07-24 18:17:49.229867] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.838 [2024-07-24 18:17:49.229937] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x136a180 00:14:40.838 [2024-07-24 18:17:49.229944] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:40.838 [2024-07-24 18:17:49.230057] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1508e00 00:14:40.838 [2024-07-24 18:17:49.230152] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x136a180 00:14:40.838 [2024-07-24 18:17:49.230159] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x136a180 00:14:40.838 [2024-07-24 18:17:49.230226] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.838 pt3 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.838 
18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.838 "name": "raid_bdev1", 00:14:40.838 "uuid": "ab2181f5-651e-455a-b214-f06f6d77d596", 00:14:40.838 "strip_size_kb": 0, 00:14:40.838 "state": "online", 00:14:40.838 "raid_level": "raid1", 00:14:40.838 "superblock": true, 00:14:40.838 "num_base_bdevs": 3, 00:14:40.838 "num_base_bdevs_discovered": 2, 00:14:40.838 "num_base_bdevs_operational": 2, 00:14:40.838 "base_bdevs_list": [ 00:14:40.838 { 00:14:40.838 "name": null, 00:14:40.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.838 "is_configured": false, 00:14:40.838 "data_offset": 2048, 00:14:40.838 "data_size": 63488 00:14:40.838 }, 00:14:40.838 { 00:14:40.838 "name": "pt2", 00:14:40.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.838 "is_configured": true, 00:14:40.838 "data_offset": 2048, 00:14:40.838 "data_size": 63488 00:14:40.838 }, 00:14:40.838 { 00:14:40.838 "name": "pt3", 00:14:40.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.838 "is_configured": true, 00:14:40.838 "data_offset": 2048, 00:14:40.838 "data_size": 63488 00:14:40.838 } 00:14:40.838 ] 00:14:40.838 }' 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.838 18:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.406 18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:41.406 
18:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:41.665 18:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:14:41.665 18:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:41.665 18:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:14:41.665 [2024-07-24 18:17:50.248269] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' ab2181f5-651e-455a-b214-f06f6d77d596 '!=' ab2181f5-651e-455a-b214-f06f6d77d596 ']' 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2206137 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2206137 ']' 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2206137 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2206137 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2206137' 00:14:41.924 killing process with pid 2206137 00:14:41.924 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2206137 00:14:41.924 
[2024-07-24 18:17:50.323842] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.924 [2024-07-24 18:17:50.323881] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.924 [2024-07-24 18:17:50.323918] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.925 [2024-07-24 18:17:50.323925] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x136a180 name raid_bdev1, state offline 00:14:41.925 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2206137 00:14:41.925 [2024-07-24 18:17:50.346526] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:41.925 18:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:41.925 00:14:41.925 real 0m16.603s 00:14:41.925 user 0m30.146s 00:14:41.925 sys 0m3.204s 00:14:41.925 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.925 18:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.925 ************************************ 00:14:41.925 END TEST raid_superblock_test 00:14:41.925 ************************************ 00:14:42.184 18:17:50 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:14:42.184 18:17:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:42.184 18:17:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.184 18:17:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:42.184 ************************************ 00:14:42.184 START TEST raid_read_error_test 00:14:42.184 ************************************ 00:14:42.184 18:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:14:42.184 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 
00:14:42.184 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:42.184 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:42.185 18:17:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.7caBDaDmap 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2209518 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2209518 /var/tmp/spdk-raid.sock 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2209518 ']' 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:42.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.185 18:17:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.185 [2024-07-24 18:17:50.671916] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:14:42.185 [2024-07-24 18:17:50.671957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2209518 ] 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:01.0 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:01.1 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:01.2 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:01.3 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:01.4 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:01.5 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:01.6 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:01.7 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:02.0 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:02.1 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:02.2 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:02.3 cannot be used 
00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:02.4 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:02.5 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:02.6 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b3:02.7 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:01.0 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:01.1 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:01.2 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:01.3 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:01.4 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:01.5 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:01.6 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:01.7 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:02.0 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:02.1 cannot be used 00:14:42.185 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:02.2 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:02.3 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:02.4 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:02.5 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:02.6 cannot be used 00:14:42.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.185 EAL: Requested device 0000:b5:02.7 cannot be used 00:14:42.185 [2024-07-24 18:17:50.762014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.445 [2024-07-24 18:17:50.832949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.445 [2024-07-24 18:17:50.882669] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.445 [2024-07-24 18:17:50.882697] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.014 18:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.014 18:17:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:43.014 18:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:43.014 18:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:43.273 BaseBdev1_malloc 00:14:43.273 18:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:43.273 true 00:14:43.273 18:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:43.532 [2024-07-24 18:17:51.962802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:43.532 [2024-07-24 18:17:51.962838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.532 [2024-07-24 18:17:51.962850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2647ed0 00:14:43.532 [2024-07-24 18:17:51.962858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.532 [2024-07-24 18:17:51.963992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.532 [2024-07-24 18:17:51.964018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:43.532 BaseBdev1 00:14:43.532 18:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:43.532 18:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:43.532 BaseBdev2_malloc 00:14:43.791 18:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:43.791 true 00:14:43.791 18:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:44.050 [2024-07-24 18:17:52.447650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:14:44.050 [2024-07-24 18:17:52.447684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.050 [2024-07-24 18:17:52.447697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x264cb60 00:14:44.050 [2024-07-24 18:17:52.447705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.050 [2024-07-24 18:17:52.448665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.050 [2024-07-24 18:17:52.448687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.050 BaseBdev2 00:14:44.050 18:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:44.050 18:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.050 BaseBdev3_malloc 00:14:44.050 18:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:44.309 true 00:14:44.309 18:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:44.569 [2024-07-24 18:17:52.964461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:44.569 [2024-07-24 18:17:52.964491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.569 [2024-07-24 18:17:52.964504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x264dad0 00:14:44.569 [2024-07-24 18:17:52.964512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.569 [2024-07-24 
18:17:52.965398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.569 [2024-07-24 18:17:52.965419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:44.569 BaseBdev3 00:14:44.569 18:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:44.569 [2024-07-24 18:17:53.144975] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.569 [2024-07-24 18:17:53.145758] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.569 [2024-07-24 18:17:53.145812] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.569 [2024-07-24 18:17:53.145942] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x264f8e0 00:14:44.569 [2024-07-24 18:17:53.145949] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:44.569 [2024-07-24 18:17:53.146054] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x264f560 00:14:44.569 [2024-07-24 18:17:53.146150] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x264f8e0 00:14:44.569 [2024-07-24 18:17:53.146156] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x264f8e0 00:14:44.569 [2024-07-24 18:17:53.146216] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.569 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:44.569 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.829 "name": "raid_bdev1", 00:14:44.829 "uuid": "b097bf5b-dc46-42fa-a26f-53bf942104cc", 00:14:44.829 "strip_size_kb": 0, 00:14:44.829 "state": "online", 00:14:44.829 "raid_level": "raid1", 00:14:44.829 "superblock": true, 00:14:44.829 "num_base_bdevs": 3, 00:14:44.829 "num_base_bdevs_discovered": 3, 00:14:44.829 "num_base_bdevs_operational": 3, 00:14:44.829 "base_bdevs_list": [ 00:14:44.829 { 00:14:44.829 "name": "BaseBdev1", 00:14:44.829 "uuid": "962574b8-fbdc-5b14-9466-b7ec491b4d3a", 00:14:44.829 "is_configured": true, 00:14:44.829 "data_offset": 2048, 00:14:44.829 "data_size": 63488 00:14:44.829 }, 00:14:44.829 { 00:14:44.829 "name": "BaseBdev2", 00:14:44.829 "uuid": "860e5f6f-388b-576d-bc72-d7fef5f18718", 00:14:44.829 "is_configured": true, 00:14:44.829 "data_offset": 2048, 00:14:44.829 "data_size": 63488 
00:14:44.829 }, 00:14:44.829 { 00:14:44.829 "name": "BaseBdev3", 00:14:44.829 "uuid": "7f6badf0-0c47-59c3-af26-9b820da55219", 00:14:44.829 "is_configured": true, 00:14:44.829 "data_offset": 2048, 00:14:44.829 "data_size": 63488 00:14:44.829 } 00:14:44.829 ] 00:14:44.829 }' 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.829 18:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.396 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:45.396 18:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:45.396 [2024-07-24 18:17:53.923195] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26543d0 00:14:46.333 18:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.592 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.850 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.850 "name": "raid_bdev1", 00:14:46.850 "uuid": "b097bf5b-dc46-42fa-a26f-53bf942104cc", 00:14:46.850 "strip_size_kb": 0, 00:14:46.850 "state": "online", 00:14:46.850 "raid_level": "raid1", 00:14:46.850 "superblock": true, 00:14:46.850 "num_base_bdevs": 3, 00:14:46.850 "num_base_bdevs_discovered": 3, 00:14:46.850 "num_base_bdevs_operational": 3, 00:14:46.850 "base_bdevs_list": [ 00:14:46.850 { 00:14:46.850 "name": "BaseBdev1", 00:14:46.850 "uuid": "962574b8-fbdc-5b14-9466-b7ec491b4d3a", 00:14:46.850 "is_configured": true, 00:14:46.850 "data_offset": 2048, 00:14:46.850 "data_size": 63488 00:14:46.850 }, 00:14:46.850 { 00:14:46.850 "name": "BaseBdev2", 00:14:46.850 "uuid": "860e5f6f-388b-576d-bc72-d7fef5f18718", 00:14:46.850 "is_configured": true, 00:14:46.850 "data_offset": 2048, 00:14:46.850 "data_size": 63488 00:14:46.850 }, 00:14:46.850 { 00:14:46.850 "name": "BaseBdev3", 00:14:46.850 "uuid": 
"7f6badf0-0c47-59c3-af26-9b820da55219", 00:14:46.850 "is_configured": true, 00:14:46.850 "data_offset": 2048, 00:14:46.850 "data_size": 63488 00:14:46.850 } 00:14:46.850 ] 00:14:46.850 }' 00:14:46.850 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.850 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.109 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:47.368 [2024-07-24 18:17:55.810598] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:47.368 [2024-07-24 18:17:55.810637] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.368 [2024-07-24 18:17:55.812724] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.368 [2024-07-24 18:17:55.812749] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.368 [2024-07-24 18:17:55.812812] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.368 [2024-07-24 18:17:55.812820] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x264f8e0 name raid_bdev1, state offline 00:14:47.368 0 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2209518 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2209518 ']' 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2209518 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 2209518 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2209518' 00:14:47.368 killing process with pid 2209518 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2209518 00:14:47.368 [2024-07-24 18:17:55.871422] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.368 18:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2209518 00:14:47.368 [2024-07-24 18:17:55.888836] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.628 18:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.7caBDaDmap 00:14:47.628 18:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:47.628 18:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:47.628 18:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:47.628 18:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:47.628 18:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:47.628 18:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:47.628 18:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:47.628 00:14:47.628 real 0m5.477s 00:14:47.628 user 0m8.365s 00:14:47.628 sys 0m0.971s 00:14:47.628 18:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:47.628 18:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.628 
************************************ 00:14:47.628 END TEST raid_read_error_test 00:14:47.628 ************************************ 00:14:47.628 18:17:56 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:47.628 18:17:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:47.628 18:17:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:47.628 18:17:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.628 ************************************ 00:14:47.628 START TEST raid_write_error_test 00:14:47.628 ************************************ 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # 
echo BaseBdev3 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.N390vg4Fth 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2210649 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2210649 /var/tmp/spdk-raid.sock 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2210649 ']' 
00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:47.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.628 18:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.887 [2024-07-24 18:17:56.237774] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:14:47.887 [2024-07-24 18:17:56.237822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210649 ] 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:01.0 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:01.1 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:01.2 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:01.3 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:01.4 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:01.5 
cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:01.6 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:01.7 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:02.0 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:02.1 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:02.2 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:02.3 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:02.4 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:02.5 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:02.6 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b3:02.7 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:01.0 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:01.1 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:01.2 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:01.3 cannot be used 
00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:01.4 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:01.5 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:01.6 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:01.7 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:02.0 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:02.1 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:02.2 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:02.3 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:02.4 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:02.5 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:02.6 cannot be used 00:14:47.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:47.887 EAL: Requested device 0000:b5:02.7 cannot be used 00:14:47.887 [2024-07-24 18:17:56.330397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.887 [2024-07-24 18:17:56.403919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.887 [2024-07-24 18:17:56.454114] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:14:47.887 [2024-07-24 18:17:56.454141] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.455 18:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.455 18:17:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:48.455 18:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:48.455 18:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:48.714 BaseBdev1_malloc 00:14:48.714 18:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:48.973 true 00:14:48.973 18:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:48.973 [2024-07-24 18:17:57.518282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:48.973 [2024-07-24 18:17:57.518316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.973 [2024-07-24 18:17:57.518329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb8eed0 00:14:48.973 [2024-07-24 18:17:57.518337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.973 [2024-07-24 18:17:57.519517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.973 [2024-07-24 18:17:57.519540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:48.973 BaseBdev1 00:14:48.973 18:17:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:48.973 18:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:49.232 BaseBdev2_malloc 00:14:49.232 18:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:49.491 true 00:14:49.491 18:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:49.491 [2024-07-24 18:17:58.007118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:49.491 [2024-07-24 18:17:58.007147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.491 [2024-07-24 18:17:58.007160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb93b60 00:14:49.491 [2024-07-24 18:17:58.007168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.491 [2024-07-24 18:17:58.008133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.491 [2024-07-24 18:17:58.008155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:49.491 BaseBdev2 00:14:49.491 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:49.491 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:49.750 BaseBdev3_malloc 00:14:49.750 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:49.750 true 00:14:49.750 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:50.009 [2024-07-24 18:17:58.491947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:50.009 [2024-07-24 18:17:58.491979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.009 [2024-07-24 18:17:58.491995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb94ad0 00:14:50.009 [2024-07-24 18:17:58.492003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.009 [2024-07-24 18:17:58.492989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.009 [2024-07-24 18:17:58.493012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:50.009 BaseBdev3 00:14:50.009 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:50.268 [2024-07-24 18:17:58.660403] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.268 [2024-07-24 18:17:58.661261] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.268 [2024-07-24 18:17:58.661307] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.268 [2024-07-24 18:17:58.661443] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xb968e0 00:14:50.268 [2024-07-24 18:17:58.661450] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:14:50.268 [2024-07-24 18:17:58.661579] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb96560 00:14:50.268 [2024-07-24 18:17:58.661694] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb968e0 00:14:50.268 [2024-07-24 18:17:58.661701] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb968e0 00:14:50.268 [2024-07-24 18:17:58.661768] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.268 18:17:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.268 "name": "raid_bdev1", 00:14:50.268 "uuid": "2e6a3bc1-277e-4668-ac1a-cc2406e3ecdb", 00:14:50.268 "strip_size_kb": 0, 00:14:50.268 "state": "online", 00:14:50.268 "raid_level": "raid1", 00:14:50.268 "superblock": true, 00:14:50.268 "num_base_bdevs": 3, 00:14:50.268 "num_base_bdevs_discovered": 3, 00:14:50.268 "num_base_bdevs_operational": 3, 00:14:50.268 "base_bdevs_list": [ 00:14:50.268 { 00:14:50.268 "name": "BaseBdev1", 00:14:50.268 "uuid": "5c128db1-a061-503d-988e-cb03b6a1d450", 00:14:50.268 "is_configured": true, 00:14:50.268 "data_offset": 2048, 00:14:50.268 "data_size": 63488 00:14:50.268 }, 00:14:50.268 { 00:14:50.268 "name": "BaseBdev2", 00:14:50.268 "uuid": "d34727db-5c8d-59d1-b05e-587f242bd3e3", 00:14:50.268 "is_configured": true, 00:14:50.268 "data_offset": 2048, 00:14:50.268 "data_size": 63488 00:14:50.268 }, 00:14:50.268 { 00:14:50.268 "name": "BaseBdev3", 00:14:50.268 "uuid": "bbdd0a75-1589-5dcd-9bc3-f16c6a2ee688", 00:14:50.268 "is_configured": true, 00:14:50.268 "data_offset": 2048, 00:14:50.268 "data_size": 63488 00:14:50.268 } 00:14:50.268 ] 00:14:50.268 }' 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.268 18:17:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.837 18:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:50.837 18:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:50.837 [2024-07-24 18:17:59.398509] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb9b3d0 00:14:51.809 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error 
EE_BaseBdev1_malloc write failure 00:14:52.068 [2024-07-24 18:18:00.481868] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:52.068 [2024-07-24 18:18:00.481912] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:52.068 [2024-07-24 18:18:00.482085] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xb9b3d0 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.068 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.328 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.328 "name": "raid_bdev1", 00:14:52.328 "uuid": "2e6a3bc1-277e-4668-ac1a-cc2406e3ecdb", 00:14:52.328 "strip_size_kb": 0, 00:14:52.328 "state": "online", 00:14:52.328 "raid_level": "raid1", 00:14:52.328 "superblock": true, 00:14:52.328 "num_base_bdevs": 3, 00:14:52.328 "num_base_bdevs_discovered": 2, 00:14:52.328 "num_base_bdevs_operational": 2, 00:14:52.328 "base_bdevs_list": [ 00:14:52.328 { 00:14:52.328 "name": null, 00:14:52.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.328 "is_configured": false, 00:14:52.328 "data_offset": 2048, 00:14:52.328 "data_size": 63488 00:14:52.328 }, 00:14:52.328 { 00:14:52.328 "name": "BaseBdev2", 00:14:52.328 "uuid": "d34727db-5c8d-59d1-b05e-587f242bd3e3", 00:14:52.328 "is_configured": true, 00:14:52.328 "data_offset": 2048, 00:14:52.328 "data_size": 63488 00:14:52.328 }, 00:14:52.328 { 00:14:52.328 "name": "BaseBdev3", 00:14:52.328 "uuid": "bbdd0a75-1589-5dcd-9bc3-f16c6a2ee688", 00:14:52.328 "is_configured": true, 00:14:52.328 "data_offset": 2048, 00:14:52.328 "data_size": 63488 00:14:52.328 } 00:14:52.328 ] 00:14:52.328 }' 00:14:52.328 18:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.328 18:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 18:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:52.847 [2024-07-24 18:18:01.332933] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.847 [2024-07-24 18:18:01.332963] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.847 [2024-07-24 18:18:01.334942] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.847 [2024-07-24 18:18:01.334981] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.847 [2024-07-24 18:18:01.335030] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.847 [2024-07-24 18:18:01.335037] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb968e0 name raid_bdev1, state offline 00:14:52.847 0 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2210649 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2210649 ']' 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2210649 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2210649 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2210649' 00:14:52.847 killing process with pid 2210649 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2210649 00:14:52.847 [2024-07-24 18:18:01.409762] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.847 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2210649 00:14:52.847 [2024-07-24 18:18:01.428288] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.106 18:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.N390vg4Fth 00:14:53.106 18:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:53.106 18:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:53.106 18:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:53.106 18:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:53.106 18:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:53.106 18:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:53.106 18:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:53.106 00:14:53.106 real 0m5.451s 00:14:53.106 user 0m8.366s 00:14:53.106 sys 0m0.916s 00:14:53.106 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.106 18:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.106 ************************************ 00:14:53.106 END TEST raid_write_error_test 00:14:53.106 ************************************ 00:14:53.106 18:18:01 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:53.106 18:18:01 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:53.106 18:18:01 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:53.106 18:18:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:53.106 18:18:01 bdev_raid -- common/autotest_common.sh@1107 -- 
# xtrace_disable 00:14:53.106 18:18:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.366 ************************************ 00:14:53.366 START TEST raid_state_function_test 00:14:53.366 ************************************ 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:53.366 18:18:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2211695 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2211695' 00:14:53.366 Process raid pid: 2211695 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 
0 -L bdev_raid 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2211695 /var/tmp/spdk-raid.sock 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2211695 ']' 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:53.366 18:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.367 18:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:53.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:53.367 18:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.367 18:18:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.367 [2024-07-24 18:18:01.766294] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:14:53.367 [2024-07-24 18:18:01.766347] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:53.367 EAL: Requested device 0000:b3:01.0 cannot be used 00:14:53.367 (same message pair repeated for the remaining QAT functions 0000:b3:01.1-0000:b3:02.7 and 0000:b5:01.0-0000:b5:02.7) 00:14:53.367 [2024-07-24 18:18:01.860366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.367 [2024-07-24 18:18:01.933703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.627 [2024-07-24 18:18:01.983306] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.627 [2024-07-24 18:18:01.983329] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.194 18:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.194 18:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:54.194 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:54.194 [2024-07-24 18:18:02.718105] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.194 [2024-07-24 18:18:02.718133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1
doesn't exist now 00:14:54.194 [2024-07-24 18:18:02.718140] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.194 [2024-07-24 18:18:02.718147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.194 [2024-07-24 18:18:02.718153] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:54.194 [2024-07-24 18:18:02.718160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.194 [2024-07-24 18:18:02.718166] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:54.194 [2024-07-24 18:18:02.718173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:54.194 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:54.194 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.194 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:54.194 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:54.194 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.202 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:54.202 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.202 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.202 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.202 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.202 18:18:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.202 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.463 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.463 "name": "Existed_Raid", 00:14:54.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.463 "strip_size_kb": 64, 00:14:54.463 "state": "configuring", 00:14:54.463 "raid_level": "raid0", 00:14:54.463 "superblock": false, 00:14:54.463 "num_base_bdevs": 4, 00:14:54.463 "num_base_bdevs_discovered": 0, 00:14:54.463 "num_base_bdevs_operational": 4, 00:14:54.463 "base_bdevs_list": [ 00:14:54.463 { 00:14:54.463 "name": "BaseBdev1", 00:14:54.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.463 "is_configured": false, 00:14:54.463 "data_offset": 0, 00:14:54.463 "data_size": 0 00:14:54.463 }, 00:14:54.463 { 00:14:54.463 "name": "BaseBdev2", 00:14:54.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.463 "is_configured": false, 00:14:54.463 "data_offset": 0, 00:14:54.463 "data_size": 0 00:14:54.463 }, 00:14:54.463 { 00:14:54.463 "name": "BaseBdev3", 00:14:54.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.463 "is_configured": false, 00:14:54.463 "data_offset": 0, 00:14:54.463 "data_size": 0 00:14:54.463 }, 00:14:54.463 { 00:14:54.463 "name": "BaseBdev4", 00:14:54.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.463 "is_configured": false, 00:14:54.463 "data_offset": 0, 00:14:54.463 "data_size": 0 00:14:54.463 } 00:14:54.463 ] 00:14:54.463 }' 00:14:54.463 18:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.463 18:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 18:18:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:55.030 [2024-07-24 18:18:03.548127] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.030 [2024-07-24 18:18:03.548146] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16a11e0 name Existed_Raid, state configuring 00:14:55.030 18:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:55.289 [2024-07-24 18:18:03.728603] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.289 [2024-07-24 18:18:03.728621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.289 [2024-07-24 18:18:03.728631] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.289 [2024-07-24 18:18:03.728639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.289 [2024-07-24 18:18:03.728644] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.289 [2024-07-24 18:18:03.728651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.289 [2024-07-24 18:18:03.728656] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:55.289 [2024-07-24 18:18:03.728663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:55.289 18:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.557 [2024-07-24 18:18:03.913825] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.557 BaseBdev1 00:14:55.557 18:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:55.557 18:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:55.557 18:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:55.557 18:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:55.557 18:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:55.557 18:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:55.557 18:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:55.557 18:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.851 [ 00:14:55.851 { 00:14:55.851 "name": "BaseBdev1", 00:14:55.851 "aliases": [ 00:14:55.851 "f1818978-fb85-4233-a143-a5826fddcc47" 00:14:55.851 ], 00:14:55.851 "product_name": "Malloc disk", 00:14:55.851 "block_size": 512, 00:14:55.851 "num_blocks": 65536, 00:14:55.851 "uuid": "f1818978-fb85-4233-a143-a5826fddcc47", 00:14:55.851 "assigned_rate_limits": { 00:14:55.851 "rw_ios_per_sec": 0, 00:14:55.851 "rw_mbytes_per_sec": 0, 00:14:55.851 "r_mbytes_per_sec": 0, 00:14:55.851 "w_mbytes_per_sec": 0 00:14:55.851 }, 00:14:55.851 "claimed": true, 00:14:55.851 "claim_type": "exclusive_write", 00:14:55.851 "zoned": false, 00:14:55.851 "supported_io_types": { 00:14:55.851 "read": true, 00:14:55.851 "write": true, 00:14:55.851 "unmap": true, 00:14:55.851 "flush": true, 00:14:55.851 
"reset": true, 00:14:55.851 "nvme_admin": false, 00:14:55.851 "nvme_io": false, 00:14:55.851 "nvme_io_md": false, 00:14:55.851 "write_zeroes": true, 00:14:55.851 "zcopy": true, 00:14:55.851 "get_zone_info": false, 00:14:55.851 "zone_management": false, 00:14:55.851 "zone_append": false, 00:14:55.851 "compare": false, 00:14:55.851 "compare_and_write": false, 00:14:55.851 "abort": true, 00:14:55.851 "seek_hole": false, 00:14:55.851 "seek_data": false, 00:14:55.851 "copy": true, 00:14:55.851 "nvme_iov_md": false 00:14:55.851 }, 00:14:55.851 "memory_domains": [ 00:14:55.851 { 00:14:55.851 "dma_device_id": "system", 00:14:55.851 "dma_device_type": 1 00:14:55.851 }, 00:14:55.851 { 00:14:55.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.851 "dma_device_type": 2 00:14:55.851 } 00:14:55.851 ], 00:14:55.851 "driver_specific": {} 00:14:55.851 } 00:14:55.851 ] 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- 
# local num_base_bdevs_discovered 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.851 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.127 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:56.127 "name": "Existed_Raid", 00:14:56.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.127 "strip_size_kb": 64, 00:14:56.127 "state": "configuring", 00:14:56.127 "raid_level": "raid0", 00:14:56.127 "superblock": false, 00:14:56.127 "num_base_bdevs": 4, 00:14:56.127 "num_base_bdevs_discovered": 1, 00:14:56.127 "num_base_bdevs_operational": 4, 00:14:56.127 "base_bdevs_list": [ 00:14:56.127 { 00:14:56.127 "name": "BaseBdev1", 00:14:56.127 "uuid": "f1818978-fb85-4233-a143-a5826fddcc47", 00:14:56.127 "is_configured": true, 00:14:56.127 "data_offset": 0, 00:14:56.127 "data_size": 65536 00:14:56.127 }, 00:14:56.127 { 00:14:56.127 "name": "BaseBdev2", 00:14:56.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.127 "is_configured": false, 00:14:56.127 "data_offset": 0, 00:14:56.127 "data_size": 0 00:14:56.127 }, 00:14:56.127 { 00:14:56.127 "name": "BaseBdev3", 00:14:56.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.127 "is_configured": false, 00:14:56.127 "data_offset": 0, 00:14:56.127 "data_size": 0 00:14:56.127 }, 00:14:56.127 { 00:14:56.127 "name": "BaseBdev4", 00:14:56.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.127 "is_configured": false, 00:14:56.127 "data_offset": 0, 00:14:56.127 "data_size": 0 00:14:56.127 } 00:14:56.127 ] 00:14:56.127 }' 00:14:56.127 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:14:56.127 18:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.385 18:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:56.644 [2024-07-24 18:18:05.080842] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.644 [2024-07-24 18:18:05.080869] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16a0a50 name Existed_Raid, state configuring 00:14:56.644 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:56.904 [2024-07-24 18:18:05.253312] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.904 [2024-07-24 18:18:05.254390] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.904 [2024-07-24 18:18:05.254416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.904 [2024-07-24 18:18:05.254426] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.904 [2024-07-24 18:18:05.254434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.904 [2024-07-24 18:18:05.254440] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:56.904 [2024-07-24 18:18:05.254448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:56.904 18:18:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:56.904 "name": "Existed_Raid", 00:14:56.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.904 "strip_size_kb": 64, 00:14:56.904 "state": "configuring", 00:14:56.904 "raid_level": "raid0", 00:14:56.904 "superblock": false, 00:14:56.904 "num_base_bdevs": 4, 00:14:56.904 "num_base_bdevs_discovered": 1, 00:14:56.904 "num_base_bdevs_operational": 4, 00:14:56.904 "base_bdevs_list": [ 00:14:56.904 { 
00:14:56.904 "name": "BaseBdev1", 00:14:56.904 "uuid": "f1818978-fb85-4233-a143-a5826fddcc47", 00:14:56.904 "is_configured": true, 00:14:56.904 "data_offset": 0, 00:14:56.904 "data_size": 65536 00:14:56.904 }, 00:14:56.904 { 00:14:56.904 "name": "BaseBdev2", 00:14:56.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.904 "is_configured": false, 00:14:56.904 "data_offset": 0, 00:14:56.904 "data_size": 0 00:14:56.904 }, 00:14:56.904 { 00:14:56.904 "name": "BaseBdev3", 00:14:56.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.904 "is_configured": false, 00:14:56.904 "data_offset": 0, 00:14:56.904 "data_size": 0 00:14:56.904 }, 00:14:56.904 { 00:14:56.904 "name": "BaseBdev4", 00:14:56.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.904 "is_configured": false, 00:14:56.904 "data_offset": 0, 00:14:56.904 "data_size": 0 00:14:56.904 } 00:14:56.904 ] 00:14:56.904 }' 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:56.904 18:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.473 18:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:57.733 [2024-07-24 18:18:06.082089] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.733 BaseBdev2 00:14:57.733 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:57.733 18:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:57.733 18:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:57.733 18:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:57.733 18:18:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:57.733 18:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:57.733 18:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:57.733 18:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.992 [ 00:14:57.992 { 00:14:57.992 "name": "BaseBdev2", 00:14:57.992 "aliases": [ 00:14:57.992 "f29491fa-9265-4729-8bbc-42e14bb69b3c" 00:14:57.992 ], 00:14:57.992 "product_name": "Malloc disk", 00:14:57.992 "block_size": 512, 00:14:57.992 "num_blocks": 65536, 00:14:57.992 "uuid": "f29491fa-9265-4729-8bbc-42e14bb69b3c", 00:14:57.992 "assigned_rate_limits": { 00:14:57.992 "rw_ios_per_sec": 0, 00:14:57.992 "rw_mbytes_per_sec": 0, 00:14:57.992 "r_mbytes_per_sec": 0, 00:14:57.992 "w_mbytes_per_sec": 0 00:14:57.992 }, 00:14:57.992 "claimed": true, 00:14:57.992 "claim_type": "exclusive_write", 00:14:57.992 "zoned": false, 00:14:57.992 "supported_io_types": { 00:14:57.992 "read": true, 00:14:57.993 "write": true, 00:14:57.993 "unmap": true, 00:14:57.993 "flush": true, 00:14:57.993 "reset": true, 00:14:57.993 "nvme_admin": false, 00:14:57.993 "nvme_io": false, 00:14:57.993 "nvme_io_md": false, 00:14:57.993 "write_zeroes": true, 00:14:57.993 "zcopy": true, 00:14:57.993 "get_zone_info": false, 00:14:57.993 "zone_management": false, 00:14:57.993 "zone_append": false, 00:14:57.993 "compare": false, 00:14:57.993 "compare_and_write": false, 00:14:57.993 "abort": true, 00:14:57.993 "seek_hole": false, 00:14:57.993 "seek_data": false, 00:14:57.993 "copy": true, 00:14:57.993 "nvme_iov_md": false 00:14:57.993 }, 00:14:57.993 "memory_domains": [ 00:14:57.993 { 00:14:57.993 "dma_device_id": "system", 
00:14:57.993 "dma_device_type": 1 00:14:57.993 }, 00:14:57.993 { 00:14:57.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.993 "dma_device_type": 2 00:14:57.993 } 00:14:57.993 ], 00:14:57.993 "driver_specific": {} 00:14:57.993 } 00:14:57.993 ] 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.993 18:18:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.252 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.252 "name": "Existed_Raid", 00:14:58.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.252 "strip_size_kb": 64, 00:14:58.252 "state": "configuring", 00:14:58.252 "raid_level": "raid0", 00:14:58.252 "superblock": false, 00:14:58.252 "num_base_bdevs": 4, 00:14:58.252 "num_base_bdevs_discovered": 2, 00:14:58.252 "num_base_bdevs_operational": 4, 00:14:58.252 "base_bdevs_list": [ 00:14:58.252 { 00:14:58.252 "name": "BaseBdev1", 00:14:58.252 "uuid": "f1818978-fb85-4233-a143-a5826fddcc47", 00:14:58.252 "is_configured": true, 00:14:58.252 "data_offset": 0, 00:14:58.252 "data_size": 65536 00:14:58.252 }, 00:14:58.252 { 00:14:58.252 "name": "BaseBdev2", 00:14:58.252 "uuid": "f29491fa-9265-4729-8bbc-42e14bb69b3c", 00:14:58.252 "is_configured": true, 00:14:58.252 "data_offset": 0, 00:14:58.252 "data_size": 65536 00:14:58.252 }, 00:14:58.252 { 00:14:58.252 "name": "BaseBdev3", 00:14:58.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.252 "is_configured": false, 00:14:58.252 "data_offset": 0, 00:14:58.252 "data_size": 0 00:14:58.252 }, 00:14:58.252 { 00:14:58.252 "name": "BaseBdev4", 00:14:58.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.252 "is_configured": false, 00:14:58.252 "data_offset": 0, 00:14:58.252 "data_size": 0 00:14:58.252 } 00:14:58.252 ] 00:14:58.252 }' 00:14:58.252 18:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.252 18:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.821 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:58.821 [2024-07-24 18:18:07.275980] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.821 BaseBdev3 00:14:58.821 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:58.821 18:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:58.821 18:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:58.821 18:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:58.821 18:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:58.821 18:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:58.821 18:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:59.081 [ 00:14:59.081 { 00:14:59.081 "name": "BaseBdev3", 00:14:59.081 "aliases": [ 00:14:59.081 "510a218e-f973-4e63-886a-0773f0de45ec" 00:14:59.081 ], 00:14:59.081 "product_name": "Malloc disk", 00:14:59.081 "block_size": 512, 00:14:59.081 "num_blocks": 65536, 00:14:59.081 "uuid": "510a218e-f973-4e63-886a-0773f0de45ec", 00:14:59.081 "assigned_rate_limits": { 00:14:59.081 "rw_ios_per_sec": 0, 00:14:59.081 "rw_mbytes_per_sec": 0, 00:14:59.081 "r_mbytes_per_sec": 0, 00:14:59.081 "w_mbytes_per_sec": 0 00:14:59.081 }, 00:14:59.081 "claimed": true, 00:14:59.081 "claim_type": "exclusive_write", 00:14:59.081 "zoned": false, 00:14:59.081 "supported_io_types": { 00:14:59.081 "read": true, 00:14:59.081 "write": true, 00:14:59.081 "unmap": true, 00:14:59.081 "flush": true, 00:14:59.081 
"reset": true, 00:14:59.081 "nvme_admin": false, 00:14:59.081 "nvme_io": false, 00:14:59.081 "nvme_io_md": false, 00:14:59.081 "write_zeroes": true, 00:14:59.081 "zcopy": true, 00:14:59.081 "get_zone_info": false, 00:14:59.081 "zone_management": false, 00:14:59.081 "zone_append": false, 00:14:59.081 "compare": false, 00:14:59.081 "compare_and_write": false, 00:14:59.081 "abort": true, 00:14:59.081 "seek_hole": false, 00:14:59.081 "seek_data": false, 00:14:59.081 "copy": true, 00:14:59.081 "nvme_iov_md": false 00:14:59.081 }, 00:14:59.081 "memory_domains": [ 00:14:59.081 { 00:14:59.081 "dma_device_id": "system", 00:14:59.081 "dma_device_type": 1 00:14:59.081 }, 00:14:59.081 { 00:14:59.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.081 "dma_device_type": 2 00:14:59.081 } 00:14:59.081 ], 00:14:59.081 "driver_specific": {} 00:14:59.081 } 00:14:59.081 ] 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.081 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.341 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.341 "name": "Existed_Raid", 00:14:59.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.341 "strip_size_kb": 64, 00:14:59.341 "state": "configuring", 00:14:59.341 "raid_level": "raid0", 00:14:59.341 "superblock": false, 00:14:59.341 "num_base_bdevs": 4, 00:14:59.341 "num_base_bdevs_discovered": 3, 00:14:59.341 "num_base_bdevs_operational": 4, 00:14:59.341 "base_bdevs_list": [ 00:14:59.341 { 00:14:59.341 "name": "BaseBdev1", 00:14:59.341 "uuid": "f1818978-fb85-4233-a143-a5826fddcc47", 00:14:59.341 "is_configured": true, 00:14:59.341 "data_offset": 0, 00:14:59.341 "data_size": 65536 00:14:59.341 }, 00:14:59.341 { 00:14:59.341 "name": "BaseBdev2", 00:14:59.341 "uuid": "f29491fa-9265-4729-8bbc-42e14bb69b3c", 00:14:59.341 "is_configured": true, 00:14:59.341 "data_offset": 0, 00:14:59.341 "data_size": 65536 00:14:59.341 }, 00:14:59.341 { 00:14:59.341 "name": "BaseBdev3", 00:14:59.341 "uuid": "510a218e-f973-4e63-886a-0773f0de45ec", 00:14:59.341 "is_configured": true, 00:14:59.341 "data_offset": 0, 00:14:59.341 "data_size": 65536 00:14:59.341 }, 00:14:59.341 { 00:14:59.341 "name": "BaseBdev4", 00:14:59.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.341 "is_configured": 
false, 00:14:59.341 "data_offset": 0, 00:14:59.341 "data_size": 0 00:14:59.341 } 00:14:59.341 ] 00:14:59.341 }' 00:14:59.341 18:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.341 18:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.909 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:59.909 [2024-07-24 18:18:08.437798] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.909 [2024-07-24 18:18:08.437825] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x16a1ab0 00:14:59.909 [2024-07-24 18:18:08.437830] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:59.909 [2024-07-24 18:18:08.437965] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1854cd0 00:14:59.909 [2024-07-24 18:18:08.438050] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16a1ab0 00:14:59.909 [2024-07-24 18:18:08.438056] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x16a1ab0 00:14:59.909 [2024-07-24 18:18:08.438171] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.909 BaseBdev4 00:14:59.909 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:59.909 18:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:59.909 18:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:59.909 18:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:59.909 18:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:14:59.909 18:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:59.909 18:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:00.169 18:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:00.429 [ 00:15:00.429 { 00:15:00.429 "name": "BaseBdev4", 00:15:00.429 "aliases": [ 00:15:00.429 "c0efad2c-c3d3-4a5d-ad90-08ed0796fec6" 00:15:00.429 ], 00:15:00.429 "product_name": "Malloc disk", 00:15:00.429 "block_size": 512, 00:15:00.429 "num_blocks": 65536, 00:15:00.429 "uuid": "c0efad2c-c3d3-4a5d-ad90-08ed0796fec6", 00:15:00.429 "assigned_rate_limits": { 00:15:00.429 "rw_ios_per_sec": 0, 00:15:00.429 "rw_mbytes_per_sec": 0, 00:15:00.429 "r_mbytes_per_sec": 0, 00:15:00.429 "w_mbytes_per_sec": 0 00:15:00.429 }, 00:15:00.429 "claimed": true, 00:15:00.429 "claim_type": "exclusive_write", 00:15:00.429 "zoned": false, 00:15:00.429 "supported_io_types": { 00:15:00.429 "read": true, 00:15:00.429 "write": true, 00:15:00.429 "unmap": true, 00:15:00.429 "flush": true, 00:15:00.429 "reset": true, 00:15:00.429 "nvme_admin": false, 00:15:00.429 "nvme_io": false, 00:15:00.429 "nvme_io_md": false, 00:15:00.429 "write_zeroes": true, 00:15:00.429 "zcopy": true, 00:15:00.429 "get_zone_info": false, 00:15:00.429 "zone_management": false, 00:15:00.429 "zone_append": false, 00:15:00.429 "compare": false, 00:15:00.429 "compare_and_write": false, 00:15:00.429 "abort": true, 00:15:00.429 "seek_hole": false, 00:15:00.429 "seek_data": false, 00:15:00.429 "copy": true, 00:15:00.429 "nvme_iov_md": false 00:15:00.429 }, 00:15:00.429 "memory_domains": [ 00:15:00.429 { 00:15:00.429 "dma_device_id": "system", 00:15:00.429 "dma_device_type": 1 00:15:00.429 
}, 00:15:00.429 { 00:15:00.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.429 "dma_device_type": 2 00:15:00.429 } 00:15:00.429 ], 00:15:00.429 "driver_specific": {} 00:15:00.429 } 00:15:00.429 ] 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.429 "name": "Existed_Raid", 00:15:00.429 "uuid": "f99a3ac8-e915-44e3-a44e-4a4e19054a78", 00:15:00.429 "strip_size_kb": 64, 00:15:00.429 "state": "online", 00:15:00.429 "raid_level": "raid0", 00:15:00.429 "superblock": false, 00:15:00.429 "num_base_bdevs": 4, 00:15:00.429 "num_base_bdevs_discovered": 4, 00:15:00.429 "num_base_bdevs_operational": 4, 00:15:00.429 "base_bdevs_list": [ 00:15:00.429 { 00:15:00.429 "name": "BaseBdev1", 00:15:00.429 "uuid": "f1818978-fb85-4233-a143-a5826fddcc47", 00:15:00.429 "is_configured": true, 00:15:00.429 "data_offset": 0, 00:15:00.429 "data_size": 65536 00:15:00.429 }, 00:15:00.429 { 00:15:00.429 "name": "BaseBdev2", 00:15:00.429 "uuid": "f29491fa-9265-4729-8bbc-42e14bb69b3c", 00:15:00.429 "is_configured": true, 00:15:00.429 "data_offset": 0, 00:15:00.429 "data_size": 65536 00:15:00.429 }, 00:15:00.429 { 00:15:00.429 "name": "BaseBdev3", 00:15:00.429 "uuid": "510a218e-f973-4e63-886a-0773f0de45ec", 00:15:00.429 "is_configured": true, 00:15:00.429 "data_offset": 0, 00:15:00.429 "data_size": 65536 00:15:00.429 }, 00:15:00.429 { 00:15:00.429 "name": "BaseBdev4", 00:15:00.429 "uuid": "c0efad2c-c3d3-4a5d-ad90-08ed0796fec6", 00:15:00.429 "is_configured": true, 00:15:00.429 "data_offset": 0, 00:15:00.429 "data_size": 65536 00:15:00.429 } 00:15:00.429 ] 00:15:00.429 }' 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.429 18:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.999 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:00.999 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:00.999 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:15:00.999 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:00.999 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:00.999 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:00.999 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:00.999 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:00.999 [2024-07-24 18:18:09.581114] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.259 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:01.259 "name": "Existed_Raid", 00:15:01.259 "aliases": [ 00:15:01.259 "f99a3ac8-e915-44e3-a44e-4a4e19054a78" 00:15:01.259 ], 00:15:01.259 "product_name": "Raid Volume", 00:15:01.259 "block_size": 512, 00:15:01.259 "num_blocks": 262144, 00:15:01.259 "uuid": "f99a3ac8-e915-44e3-a44e-4a4e19054a78", 00:15:01.259 "assigned_rate_limits": { 00:15:01.259 "rw_ios_per_sec": 0, 00:15:01.259 "rw_mbytes_per_sec": 0, 00:15:01.259 "r_mbytes_per_sec": 0, 00:15:01.259 "w_mbytes_per_sec": 0 00:15:01.259 }, 00:15:01.259 "claimed": false, 00:15:01.259 "zoned": false, 00:15:01.259 "supported_io_types": { 00:15:01.259 "read": true, 00:15:01.259 "write": true, 00:15:01.259 "unmap": true, 00:15:01.259 "flush": true, 00:15:01.259 "reset": true, 00:15:01.259 "nvme_admin": false, 00:15:01.259 "nvme_io": false, 00:15:01.259 "nvme_io_md": false, 00:15:01.259 "write_zeroes": true, 00:15:01.260 "zcopy": false, 00:15:01.260 "get_zone_info": false, 00:15:01.260 "zone_management": false, 00:15:01.260 "zone_append": false, 00:15:01.260 "compare": false, 00:15:01.260 "compare_and_write": false, 00:15:01.260 "abort": false, 00:15:01.260 "seek_hole": false, 
00:15:01.260 "seek_data": false, 00:15:01.260 "copy": false, 00:15:01.260 "nvme_iov_md": false 00:15:01.260 }, 00:15:01.260 "memory_domains": [ 00:15:01.260 { 00:15:01.260 "dma_device_id": "system", 00:15:01.260 "dma_device_type": 1 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.260 "dma_device_type": 2 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "dma_device_id": "system", 00:15:01.260 "dma_device_type": 1 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.260 "dma_device_type": 2 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "dma_device_id": "system", 00:15:01.260 "dma_device_type": 1 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.260 "dma_device_type": 2 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "dma_device_id": "system", 00:15:01.260 "dma_device_type": 1 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.260 "dma_device_type": 2 00:15:01.260 } 00:15:01.260 ], 00:15:01.260 "driver_specific": { 00:15:01.260 "raid": { 00:15:01.260 "uuid": "f99a3ac8-e915-44e3-a44e-4a4e19054a78", 00:15:01.260 "strip_size_kb": 64, 00:15:01.260 "state": "online", 00:15:01.260 "raid_level": "raid0", 00:15:01.260 "superblock": false, 00:15:01.260 "num_base_bdevs": 4, 00:15:01.260 "num_base_bdevs_discovered": 4, 00:15:01.260 "num_base_bdevs_operational": 4, 00:15:01.260 "base_bdevs_list": [ 00:15:01.260 { 00:15:01.260 "name": "BaseBdev1", 00:15:01.260 "uuid": "f1818978-fb85-4233-a143-a5826fddcc47", 00:15:01.260 "is_configured": true, 00:15:01.260 "data_offset": 0, 00:15:01.260 "data_size": 65536 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "name": "BaseBdev2", 00:15:01.260 "uuid": "f29491fa-9265-4729-8bbc-42e14bb69b3c", 00:15:01.260 "is_configured": true, 00:15:01.260 "data_offset": 0, 00:15:01.260 "data_size": 65536 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "name": "BaseBdev3", 00:15:01.260 "uuid": 
"510a218e-f973-4e63-886a-0773f0de45ec", 00:15:01.260 "is_configured": true, 00:15:01.260 "data_offset": 0, 00:15:01.260 "data_size": 65536 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "name": "BaseBdev4", 00:15:01.260 "uuid": "c0efad2c-c3d3-4a5d-ad90-08ed0796fec6", 00:15:01.260 "is_configured": true, 00:15:01.260 "data_offset": 0, 00:15:01.260 "data_size": 65536 00:15:01.260 } 00:15:01.260 ] 00:15:01.260 } 00:15:01.260 } 00:15:01.260 }' 00:15:01.260 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.260 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:01.260 BaseBdev2 00:15:01.260 BaseBdev3 00:15:01.260 BaseBdev4' 00:15:01.260 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:01.260 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:01.260 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.260 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.260 "name": "BaseBdev1", 00:15:01.260 "aliases": [ 00:15:01.260 "f1818978-fb85-4233-a143-a5826fddcc47" 00:15:01.260 ], 00:15:01.260 "product_name": "Malloc disk", 00:15:01.260 "block_size": 512, 00:15:01.260 "num_blocks": 65536, 00:15:01.260 "uuid": "f1818978-fb85-4233-a143-a5826fddcc47", 00:15:01.260 "assigned_rate_limits": { 00:15:01.260 "rw_ios_per_sec": 0, 00:15:01.260 "rw_mbytes_per_sec": 0, 00:15:01.260 "r_mbytes_per_sec": 0, 00:15:01.260 "w_mbytes_per_sec": 0 00:15:01.260 }, 00:15:01.260 "claimed": true, 00:15:01.260 "claim_type": "exclusive_write", 00:15:01.260 "zoned": false, 00:15:01.260 "supported_io_types": { 00:15:01.260 "read": true, 00:15:01.260 
"write": true, 00:15:01.260 "unmap": true, 00:15:01.260 "flush": true, 00:15:01.260 "reset": true, 00:15:01.260 "nvme_admin": false, 00:15:01.260 "nvme_io": false, 00:15:01.260 "nvme_io_md": false, 00:15:01.260 "write_zeroes": true, 00:15:01.260 "zcopy": true, 00:15:01.260 "get_zone_info": false, 00:15:01.260 "zone_management": false, 00:15:01.260 "zone_append": false, 00:15:01.260 "compare": false, 00:15:01.260 "compare_and_write": false, 00:15:01.260 "abort": true, 00:15:01.260 "seek_hole": false, 00:15:01.260 "seek_data": false, 00:15:01.260 "copy": true, 00:15:01.260 "nvme_iov_md": false 00:15:01.260 }, 00:15:01.260 "memory_domains": [ 00:15:01.260 { 00:15:01.260 "dma_device_id": "system", 00:15:01.260 "dma_device_type": 1 00:15:01.260 }, 00:15:01.260 { 00:15:01.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.260 "dma_device_type": 2 00:15:01.260 } 00:15:01.260 ], 00:15:01.260 "driver_specific": {} 00:15:01.260 }' 00:15:01.260 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.520 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.520 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.520 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.520 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.520 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.520 18:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.520 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.520 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.520 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.520 18:18:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.520 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.520 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:01.781 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:01.781 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.781 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.781 "name": "BaseBdev2", 00:15:01.781 "aliases": [ 00:15:01.781 "f29491fa-9265-4729-8bbc-42e14bb69b3c" 00:15:01.781 ], 00:15:01.781 "product_name": "Malloc disk", 00:15:01.781 "block_size": 512, 00:15:01.781 "num_blocks": 65536, 00:15:01.781 "uuid": "f29491fa-9265-4729-8bbc-42e14bb69b3c", 00:15:01.781 "assigned_rate_limits": { 00:15:01.781 "rw_ios_per_sec": 0, 00:15:01.781 "rw_mbytes_per_sec": 0, 00:15:01.781 "r_mbytes_per_sec": 0, 00:15:01.781 "w_mbytes_per_sec": 0 00:15:01.781 }, 00:15:01.781 "claimed": true, 00:15:01.781 "claim_type": "exclusive_write", 00:15:01.781 "zoned": false, 00:15:01.781 "supported_io_types": { 00:15:01.781 "read": true, 00:15:01.781 "write": true, 00:15:01.781 "unmap": true, 00:15:01.781 "flush": true, 00:15:01.781 "reset": true, 00:15:01.781 "nvme_admin": false, 00:15:01.781 "nvme_io": false, 00:15:01.781 "nvme_io_md": false, 00:15:01.781 "write_zeroes": true, 00:15:01.781 "zcopy": true, 00:15:01.781 "get_zone_info": false, 00:15:01.781 "zone_management": false, 00:15:01.781 "zone_append": false, 00:15:01.781 "compare": false, 00:15:01.781 "compare_and_write": false, 00:15:01.781 "abort": true, 00:15:01.781 "seek_hole": false, 00:15:01.781 "seek_data": false, 00:15:01.781 "copy": true, 00:15:01.781 "nvme_iov_md": false 00:15:01.781 }, 
00:15:01.781 "memory_domains": [ 00:15:01.781 { 00:15:01.781 "dma_device_id": "system", 00:15:01.781 "dma_device_type": 1 00:15:01.781 }, 00:15:01.781 { 00:15:01.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.781 "dma_device_type": 2 00:15:01.781 } 00:15:01.781 ], 00:15:01.781 "driver_specific": {} 00:15:01.781 }' 00:15:01.781 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.781 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.781 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.781 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.041 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:02.301 18:18:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:02.301 "name": "BaseBdev3", 00:15:02.301 "aliases": [ 00:15:02.301 "510a218e-f973-4e63-886a-0773f0de45ec" 00:15:02.301 ], 00:15:02.301 "product_name": "Malloc disk", 00:15:02.301 "block_size": 512, 00:15:02.301 "num_blocks": 65536, 00:15:02.301 "uuid": "510a218e-f973-4e63-886a-0773f0de45ec", 00:15:02.301 "assigned_rate_limits": { 00:15:02.301 "rw_ios_per_sec": 0, 00:15:02.301 "rw_mbytes_per_sec": 0, 00:15:02.301 "r_mbytes_per_sec": 0, 00:15:02.301 "w_mbytes_per_sec": 0 00:15:02.301 }, 00:15:02.301 "claimed": true, 00:15:02.301 "claim_type": "exclusive_write", 00:15:02.301 "zoned": false, 00:15:02.301 "supported_io_types": { 00:15:02.301 "read": true, 00:15:02.301 "write": true, 00:15:02.301 "unmap": true, 00:15:02.301 "flush": true, 00:15:02.301 "reset": true, 00:15:02.301 "nvme_admin": false, 00:15:02.301 "nvme_io": false, 00:15:02.301 "nvme_io_md": false, 00:15:02.301 "write_zeroes": true, 00:15:02.301 "zcopy": true, 00:15:02.301 "get_zone_info": false, 00:15:02.301 "zone_management": false, 00:15:02.301 "zone_append": false, 00:15:02.301 "compare": false, 00:15:02.301 "compare_and_write": false, 00:15:02.301 "abort": true, 00:15:02.301 "seek_hole": false, 00:15:02.301 "seek_data": false, 00:15:02.301 "copy": true, 00:15:02.301 "nvme_iov_md": false 00:15:02.301 }, 00:15:02.301 "memory_domains": [ 00:15:02.301 { 00:15:02.301 "dma_device_id": "system", 00:15:02.301 "dma_device_type": 1 00:15:02.301 }, 00:15:02.301 { 00:15:02.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.301 "dma_device_type": 2 00:15:02.301 } 00:15:02.301 ], 00:15:02.301 "driver_specific": {} 00:15:02.301 }' 00:15:02.301 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.301 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.301 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:15:02.301 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.301 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.561 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:02.561 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.561 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.561 18:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.561 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.561 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.561 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.561 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.561 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:02.561 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.821 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:02.821 "name": "BaseBdev4", 00:15:02.821 "aliases": [ 00:15:02.821 "c0efad2c-c3d3-4a5d-ad90-08ed0796fec6" 00:15:02.821 ], 00:15:02.821 "product_name": "Malloc disk", 00:15:02.821 "block_size": 512, 00:15:02.821 "num_blocks": 65536, 00:15:02.821 "uuid": "c0efad2c-c3d3-4a5d-ad90-08ed0796fec6", 00:15:02.821 "assigned_rate_limits": { 00:15:02.821 "rw_ios_per_sec": 0, 00:15:02.821 "rw_mbytes_per_sec": 0, 00:15:02.821 "r_mbytes_per_sec": 0, 00:15:02.821 "w_mbytes_per_sec": 0 00:15:02.821 }, 00:15:02.821 "claimed": true, 00:15:02.821 
"claim_type": "exclusive_write", 00:15:02.821 "zoned": false, 00:15:02.821 "supported_io_types": { 00:15:02.821 "read": true, 00:15:02.821 "write": true, 00:15:02.821 "unmap": true, 00:15:02.821 "flush": true, 00:15:02.821 "reset": true, 00:15:02.821 "nvme_admin": false, 00:15:02.821 "nvme_io": false, 00:15:02.821 "nvme_io_md": false, 00:15:02.821 "write_zeroes": true, 00:15:02.821 "zcopy": true, 00:15:02.821 "get_zone_info": false, 00:15:02.821 "zone_management": false, 00:15:02.821 "zone_append": false, 00:15:02.821 "compare": false, 00:15:02.821 "compare_and_write": false, 00:15:02.821 "abort": true, 00:15:02.821 "seek_hole": false, 00:15:02.821 "seek_data": false, 00:15:02.821 "copy": true, 00:15:02.821 "nvme_iov_md": false 00:15:02.821 }, 00:15:02.821 "memory_domains": [ 00:15:02.821 { 00:15:02.821 "dma_device_id": "system", 00:15:02.821 "dma_device_type": 1 00:15:02.821 }, 00:15:02.821 { 00:15:02.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.821 "dma_device_type": 2 00:15:02.821 } 00:15:02.821 ], 00:15:02.821 "driver_specific": {} 00:15:02.821 }' 00:15:02.821 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.821 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.821 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:02.821 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.821 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.821 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:02.821 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.081 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.081 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:15:03.081 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.081 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.081 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.081 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:03.341 [2024-07-24 18:18:11.694396] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.341 [2024-07-24 18:18:11.694414] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.341 [2024-07-24 18:18:11.694448] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.341 "name": "Existed_Raid", 00:15:03.341 "uuid": "f99a3ac8-e915-44e3-a44e-4a4e19054a78", 00:15:03.341 "strip_size_kb": 64, 00:15:03.341 "state": "offline", 00:15:03.341 "raid_level": "raid0", 00:15:03.341 "superblock": false, 00:15:03.341 "num_base_bdevs": 4, 00:15:03.341 "num_base_bdevs_discovered": 3, 00:15:03.341 "num_base_bdevs_operational": 3, 00:15:03.341 "base_bdevs_list": [ 00:15:03.341 { 00:15:03.341 "name": null, 00:15:03.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.341 "is_configured": false, 00:15:03.341 "data_offset": 0, 00:15:03.341 "data_size": 65536 00:15:03.341 }, 00:15:03.341 { 00:15:03.341 "name": "BaseBdev2", 00:15:03.341 "uuid": "f29491fa-9265-4729-8bbc-42e14bb69b3c", 00:15:03.341 "is_configured": true, 00:15:03.341 "data_offset": 0, 00:15:03.341 "data_size": 65536 00:15:03.341 }, 00:15:03.341 { 00:15:03.341 "name": "BaseBdev3", 00:15:03.341 "uuid": "510a218e-f973-4e63-886a-0773f0de45ec", 00:15:03.341 "is_configured": true, 00:15:03.341 
"data_offset": 0, 00:15:03.341 "data_size": 65536 00:15:03.341 }, 00:15:03.341 { 00:15:03.341 "name": "BaseBdev4", 00:15:03.341 "uuid": "c0efad2c-c3d3-4a5d-ad90-08ed0796fec6", 00:15:03.341 "is_configured": true, 00:15:03.341 "data_offset": 0, 00:15:03.341 "data_size": 65536 00:15:03.341 } 00:15:03.341 ] 00:15:03.341 }' 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.341 18:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.911 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:03.911 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:03.911 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:03.911 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.171 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:04.171 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.171 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:04.171 [2024-07-24 18:18:12.713804] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.171 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:04.171 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:04.171 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:15:04.171 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:04.430 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:04.430 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.430 18:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:04.690 [2024-07-24 18:18:13.064500] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:04.690 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:04.690 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:04.690 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.690 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:04.690 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:04.690 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.690 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:04.949 [2024-07-24 18:18:13.414905] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:04.949 [2024-07-24 18:18:13.414934] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16a1ab0 name Existed_Raid, state offline 00:15:04.949 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:04.949 18:18:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:04.949 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.949 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:05.208 BaseBdev2 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:05.208 18:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.467 18:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:05.727 [ 00:15:05.727 { 00:15:05.727 "name": "BaseBdev2", 00:15:05.727 "aliases": [ 00:15:05.727 "155e9c19-d125-4aac-a6c7-0daa31aa3fa6" 00:15:05.727 ], 00:15:05.727 "product_name": "Malloc disk", 00:15:05.727 "block_size": 512, 00:15:05.727 "num_blocks": 65536, 00:15:05.727 "uuid": "155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 00:15:05.727 "assigned_rate_limits": { 00:15:05.727 "rw_ios_per_sec": 0, 00:15:05.727 "rw_mbytes_per_sec": 0, 00:15:05.727 "r_mbytes_per_sec": 0, 00:15:05.727 "w_mbytes_per_sec": 0 00:15:05.727 }, 00:15:05.727 "claimed": false, 00:15:05.727 "zoned": false, 00:15:05.727 "supported_io_types": { 00:15:05.727 "read": true, 00:15:05.727 "write": true, 00:15:05.727 "unmap": true, 00:15:05.727 "flush": true, 00:15:05.727 "reset": true, 00:15:05.727 "nvme_admin": false, 00:15:05.727 "nvme_io": false, 00:15:05.727 "nvme_io_md": false, 00:15:05.727 "write_zeroes": true, 00:15:05.727 "zcopy": true, 00:15:05.727 "get_zone_info": false, 00:15:05.727 "zone_management": false, 00:15:05.727 "zone_append": false, 00:15:05.727 "compare": false, 00:15:05.727 "compare_and_write": false, 00:15:05.727 "abort": true, 00:15:05.727 "seek_hole": false, 00:15:05.727 "seek_data": false, 00:15:05.727 "copy": true, 00:15:05.727 "nvme_iov_md": false 00:15:05.727 }, 00:15:05.727 "memory_domains": [ 00:15:05.727 { 00:15:05.727 "dma_device_id": "system", 00:15:05.727 "dma_device_type": 1 00:15:05.727 }, 00:15:05.727 { 00:15:05.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.727 "dma_device_type": 2 00:15:05.727 } 00:15:05.727 ], 00:15:05.727 "driver_specific": {} 00:15:05.727 } 00:15:05.727 ] 00:15:05.727 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:05.727 
18:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:05.727 18:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:05.727 18:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:05.727 BaseBdev3 00:15:05.727 18:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:05.727 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:05.727 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:05.727 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:05.727 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:05.727 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:05.727 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.987 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:06.247 [ 00:15:06.247 { 00:15:06.247 "name": "BaseBdev3", 00:15:06.247 "aliases": [ 00:15:06.247 "56e3375e-6781-4cd6-8fa4-51d06e14d3eb" 00:15:06.247 ], 00:15:06.247 "product_name": "Malloc disk", 00:15:06.247 "block_size": 512, 00:15:06.247 "num_blocks": 65536, 00:15:06.247 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:06.247 "assigned_rate_limits": { 00:15:06.247 "rw_ios_per_sec": 0, 00:15:06.247 "rw_mbytes_per_sec": 0, 00:15:06.247 
"r_mbytes_per_sec": 0, 00:15:06.247 "w_mbytes_per_sec": 0 00:15:06.247 }, 00:15:06.247 "claimed": false, 00:15:06.247 "zoned": false, 00:15:06.247 "supported_io_types": { 00:15:06.247 "read": true, 00:15:06.247 "write": true, 00:15:06.247 "unmap": true, 00:15:06.247 "flush": true, 00:15:06.247 "reset": true, 00:15:06.247 "nvme_admin": false, 00:15:06.247 "nvme_io": false, 00:15:06.247 "nvme_io_md": false, 00:15:06.247 "write_zeroes": true, 00:15:06.247 "zcopy": true, 00:15:06.247 "get_zone_info": false, 00:15:06.247 "zone_management": false, 00:15:06.247 "zone_append": false, 00:15:06.247 "compare": false, 00:15:06.247 "compare_and_write": false, 00:15:06.247 "abort": true, 00:15:06.247 "seek_hole": false, 00:15:06.247 "seek_data": false, 00:15:06.247 "copy": true, 00:15:06.247 "nvme_iov_md": false 00:15:06.247 }, 00:15:06.247 "memory_domains": [ 00:15:06.247 { 00:15:06.247 "dma_device_id": "system", 00:15:06.247 "dma_device_type": 1 00:15:06.247 }, 00:15:06.247 { 00:15:06.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.247 "dma_device_type": 2 00:15:06.247 } 00:15:06.247 ], 00:15:06.247 "driver_specific": {} 00:15:06.247 } 00:15:06.247 ] 00:15:06.247 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:06.247 18:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:06.247 18:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:06.247 18:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:06.247 BaseBdev4 00:15:06.247 18:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:06.247 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:06.247 18:18:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:06.247 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:06.247 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:06.247 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:06.247 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:06.506 18:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:06.766 [ 00:15:06.766 { 00:15:06.766 "name": "BaseBdev4", 00:15:06.766 "aliases": [ 00:15:06.766 "7b4119f2-789b-481b-8086-fe018fbf7ff6" 00:15:06.766 ], 00:15:06.766 "product_name": "Malloc disk", 00:15:06.767 "block_size": 512, 00:15:06.767 "num_blocks": 65536, 00:15:06.767 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:06.767 "assigned_rate_limits": { 00:15:06.767 "rw_ios_per_sec": 0, 00:15:06.767 "rw_mbytes_per_sec": 0, 00:15:06.767 "r_mbytes_per_sec": 0, 00:15:06.767 "w_mbytes_per_sec": 0 00:15:06.767 }, 00:15:06.767 "claimed": false, 00:15:06.767 "zoned": false, 00:15:06.767 "supported_io_types": { 00:15:06.767 "read": true, 00:15:06.767 "write": true, 00:15:06.767 "unmap": true, 00:15:06.767 "flush": true, 00:15:06.767 "reset": true, 00:15:06.767 "nvme_admin": false, 00:15:06.767 "nvme_io": false, 00:15:06.767 "nvme_io_md": false, 00:15:06.767 "write_zeroes": true, 00:15:06.767 "zcopy": true, 00:15:06.767 "get_zone_info": false, 00:15:06.767 "zone_management": false, 00:15:06.767 "zone_append": false, 00:15:06.767 "compare": false, 00:15:06.767 "compare_and_write": false, 00:15:06.767 "abort": true, 00:15:06.767 
"seek_hole": false, 00:15:06.767 "seek_data": false, 00:15:06.767 "copy": true, 00:15:06.767 "nvme_iov_md": false 00:15:06.767 }, 00:15:06.767 "memory_domains": [ 00:15:06.767 { 00:15:06.767 "dma_device_id": "system", 00:15:06.767 "dma_device_type": 1 00:15:06.767 }, 00:15:06.767 { 00:15:06.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.767 "dma_device_type": 2 00:15:06.767 } 00:15:06.767 ], 00:15:06.767 "driver_specific": {} 00:15:06.767 } 00:15:06.767 ] 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:06.767 [2024-07-24 18:18:15.268618] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:06.767 [2024-07-24 18:18:15.268650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:06.767 [2024-07-24 18:18:15.268662] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.767 [2024-07-24 18:18:15.269571] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.767 [2024-07-24 18:18:15.269599] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:06.767 18:18:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.767 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.027 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.027 "name": "Existed_Raid", 00:15:07.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.027 "strip_size_kb": 64, 00:15:07.027 "state": "configuring", 00:15:07.027 "raid_level": "raid0", 00:15:07.027 "superblock": false, 00:15:07.027 "num_base_bdevs": 4, 00:15:07.027 "num_base_bdevs_discovered": 3, 00:15:07.027 "num_base_bdevs_operational": 4, 00:15:07.027 "base_bdevs_list": [ 00:15:07.027 { 00:15:07.027 "name": "BaseBdev1", 00:15:07.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.027 "is_configured": false, 00:15:07.027 "data_offset": 0, 00:15:07.027 "data_size": 0 00:15:07.027 }, 00:15:07.027 { 00:15:07.027 "name": "BaseBdev2", 00:15:07.027 "uuid": 
"155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 00:15:07.027 "is_configured": true, 00:15:07.027 "data_offset": 0, 00:15:07.027 "data_size": 65536 00:15:07.027 }, 00:15:07.027 { 00:15:07.027 "name": "BaseBdev3", 00:15:07.027 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:07.027 "is_configured": true, 00:15:07.027 "data_offset": 0, 00:15:07.027 "data_size": 65536 00:15:07.027 }, 00:15:07.027 { 00:15:07.027 "name": "BaseBdev4", 00:15:07.027 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:07.027 "is_configured": true, 00:15:07.027 "data_offset": 0, 00:15:07.027 "data_size": 65536 00:15:07.027 } 00:15:07.027 ] 00:15:07.027 }' 00:15:07.027 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.027 18:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.596 18:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:07.596 [2024-07-24 18:18:16.094741] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.596 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.856 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.856 "name": "Existed_Raid", 00:15:07.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.856 "strip_size_kb": 64, 00:15:07.856 "state": "configuring", 00:15:07.856 "raid_level": "raid0", 00:15:07.856 "superblock": false, 00:15:07.856 "num_base_bdevs": 4, 00:15:07.856 "num_base_bdevs_discovered": 2, 00:15:07.856 "num_base_bdevs_operational": 4, 00:15:07.856 "base_bdevs_list": [ 00:15:07.856 { 00:15:07.856 "name": "BaseBdev1", 00:15:07.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.856 "is_configured": false, 00:15:07.856 "data_offset": 0, 00:15:07.856 "data_size": 0 00:15:07.856 }, 00:15:07.856 { 00:15:07.856 "name": null, 00:15:07.856 "uuid": "155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 00:15:07.856 "is_configured": false, 00:15:07.856 "data_offset": 0, 00:15:07.856 "data_size": 65536 00:15:07.856 }, 00:15:07.856 { 00:15:07.856 "name": "BaseBdev3", 00:15:07.856 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:07.856 "is_configured": true, 00:15:07.856 "data_offset": 0, 00:15:07.856 "data_size": 65536 00:15:07.856 }, 00:15:07.856 { 00:15:07.856 "name": "BaseBdev4", 00:15:07.856 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:07.856 "is_configured": true, 00:15:07.856 
"data_offset": 0, 00:15:07.856 "data_size": 65536 00:15:07.856 } 00:15:07.856 ] 00:15:07.856 }' 00:15:07.856 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.856 18:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.444 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:08.444 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.444 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:08.444 18:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:08.702 [2024-07-24 18:18:17.080041] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.702 BaseBdev1 00:15:08.702 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:08.702 18:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:08.702 18:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:08.702 18:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:08.702 18:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:08.702 18:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:08.702 18:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:08.703 
18:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:08.961 [ 00:15:08.961 { 00:15:08.961 "name": "BaseBdev1", 00:15:08.961 "aliases": [ 00:15:08.961 "aa05e2f6-834c-4fb0-a863-80bca5234ff6" 00:15:08.961 ], 00:15:08.961 "product_name": "Malloc disk", 00:15:08.961 "block_size": 512, 00:15:08.961 "num_blocks": 65536, 00:15:08.961 "uuid": "aa05e2f6-834c-4fb0-a863-80bca5234ff6", 00:15:08.961 "assigned_rate_limits": { 00:15:08.961 "rw_ios_per_sec": 0, 00:15:08.961 "rw_mbytes_per_sec": 0, 00:15:08.961 "r_mbytes_per_sec": 0, 00:15:08.961 "w_mbytes_per_sec": 0 00:15:08.961 }, 00:15:08.961 "claimed": true, 00:15:08.961 "claim_type": "exclusive_write", 00:15:08.961 "zoned": false, 00:15:08.961 "supported_io_types": { 00:15:08.961 "read": true, 00:15:08.961 "write": true, 00:15:08.961 "unmap": true, 00:15:08.961 "flush": true, 00:15:08.961 "reset": true, 00:15:08.961 "nvme_admin": false, 00:15:08.961 "nvme_io": false, 00:15:08.961 "nvme_io_md": false, 00:15:08.961 "write_zeroes": true, 00:15:08.961 "zcopy": true, 00:15:08.961 "get_zone_info": false, 00:15:08.961 "zone_management": false, 00:15:08.961 "zone_append": false, 00:15:08.961 "compare": false, 00:15:08.961 "compare_and_write": false, 00:15:08.961 "abort": true, 00:15:08.961 "seek_hole": false, 00:15:08.961 "seek_data": false, 00:15:08.961 "copy": true, 00:15:08.961 "nvme_iov_md": false 00:15:08.961 }, 00:15:08.961 "memory_domains": [ 00:15:08.961 { 00:15:08.961 "dma_device_id": "system", 00:15:08.961 "dma_device_type": 1 00:15:08.961 }, 00:15:08.961 { 00:15:08.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.961 "dma_device_type": 2 00:15:08.961 } 00:15:08.961 ], 00:15:08.961 "driver_specific": {} 00:15:08.961 } 00:15:08.962 ] 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:08.962 18:18:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.962 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.221 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.221 "name": "Existed_Raid", 00:15:09.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.221 "strip_size_kb": 64, 00:15:09.221 "state": "configuring", 00:15:09.221 "raid_level": "raid0", 00:15:09.221 "superblock": false, 00:15:09.221 "num_base_bdevs": 4, 00:15:09.221 "num_base_bdevs_discovered": 3, 00:15:09.221 "num_base_bdevs_operational": 4, 00:15:09.221 "base_bdevs_list": [ 00:15:09.221 { 
00:15:09.221 "name": "BaseBdev1", 00:15:09.221 "uuid": "aa05e2f6-834c-4fb0-a863-80bca5234ff6", 00:15:09.221 "is_configured": true, 00:15:09.221 "data_offset": 0, 00:15:09.221 "data_size": 65536 00:15:09.221 }, 00:15:09.221 { 00:15:09.221 "name": null, 00:15:09.221 "uuid": "155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 00:15:09.221 "is_configured": false, 00:15:09.221 "data_offset": 0, 00:15:09.221 "data_size": 65536 00:15:09.221 }, 00:15:09.221 { 00:15:09.221 "name": "BaseBdev3", 00:15:09.221 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:09.221 "is_configured": true, 00:15:09.221 "data_offset": 0, 00:15:09.221 "data_size": 65536 00:15:09.221 }, 00:15:09.221 { 00:15:09.221 "name": "BaseBdev4", 00:15:09.221 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:09.221 "is_configured": true, 00:15:09.221 "data_offset": 0, 00:15:09.221 "data_size": 65536 00:15:09.221 } 00:15:09.221 ] 00:15:09.221 }' 00:15:09.221 18:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.221 18:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.479 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:09.479 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.739 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:09.739 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:09.998 [2024-07-24 18:18:18.391450] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 4 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.998 "name": "Existed_Raid", 00:15:09.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.998 "strip_size_kb": 64, 00:15:09.998 "state": "configuring", 00:15:09.998 "raid_level": "raid0", 00:15:09.998 "superblock": false, 00:15:09.998 "num_base_bdevs": 4, 00:15:09.998 "num_base_bdevs_discovered": 2, 00:15:09.998 "num_base_bdevs_operational": 4, 00:15:09.998 "base_bdevs_list": [ 00:15:09.998 { 00:15:09.998 "name": "BaseBdev1", 00:15:09.998 "uuid": "aa05e2f6-834c-4fb0-a863-80bca5234ff6", 00:15:09.998 
"is_configured": true, 00:15:09.998 "data_offset": 0, 00:15:09.998 "data_size": 65536 00:15:09.998 }, 00:15:09.998 { 00:15:09.998 "name": null, 00:15:09.998 "uuid": "155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 00:15:09.998 "is_configured": false, 00:15:09.998 "data_offset": 0, 00:15:09.998 "data_size": 65536 00:15:09.998 }, 00:15:09.998 { 00:15:09.998 "name": null, 00:15:09.998 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:09.998 "is_configured": false, 00:15:09.998 "data_offset": 0, 00:15:09.998 "data_size": 65536 00:15:09.998 }, 00:15:09.998 { 00:15:09.998 "name": "BaseBdev4", 00:15:09.998 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:09.998 "is_configured": true, 00:15:09.998 "data_offset": 0, 00:15:09.998 "data_size": 65536 00:15:09.998 } 00:15:09.998 ] 00:15:09.998 }' 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.998 18:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.566 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.566 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:10.825 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:10.825 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:10.825 [2024-07-24 18:18:19.406077] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.825 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:10.825 18:18:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:10.825 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:10.825 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.087 "name": "Existed_Raid", 00:15:11.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.087 "strip_size_kb": 64, 00:15:11.087 "state": "configuring", 00:15:11.087 "raid_level": "raid0", 00:15:11.087 "superblock": false, 00:15:11.087 "num_base_bdevs": 4, 00:15:11.087 "num_base_bdevs_discovered": 3, 00:15:11.087 "num_base_bdevs_operational": 4, 00:15:11.087 "base_bdevs_list": [ 00:15:11.087 { 00:15:11.087 "name": "BaseBdev1", 00:15:11.087 "uuid": "aa05e2f6-834c-4fb0-a863-80bca5234ff6", 00:15:11.087 "is_configured": true, 00:15:11.087 "data_offset": 0, 00:15:11.087 "data_size": 65536 
00:15:11.087 }, 00:15:11.087 { 00:15:11.087 "name": null, 00:15:11.087 "uuid": "155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 00:15:11.087 "is_configured": false, 00:15:11.087 "data_offset": 0, 00:15:11.087 "data_size": 65536 00:15:11.087 }, 00:15:11.087 { 00:15:11.087 "name": "BaseBdev3", 00:15:11.087 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:11.087 "is_configured": true, 00:15:11.087 "data_offset": 0, 00:15:11.087 "data_size": 65536 00:15:11.087 }, 00:15:11.087 { 00:15:11.087 "name": "BaseBdev4", 00:15:11.087 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:11.087 "is_configured": true, 00:15:11.087 "data_offset": 0, 00:15:11.087 "data_size": 65536 00:15:11.087 } 00:15:11.087 ] 00:15:11.087 }' 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.087 18:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.689 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.689 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:11.689 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:11.689 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:11.948 [2024-07-24 18:18:20.388640] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.948 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.208 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.208 "name": "Existed_Raid", 00:15:12.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.208 "strip_size_kb": 64, 00:15:12.208 "state": "configuring", 00:15:12.208 "raid_level": "raid0", 00:15:12.208 "superblock": false, 00:15:12.208 "num_base_bdevs": 4, 00:15:12.208 "num_base_bdevs_discovered": 2, 00:15:12.208 "num_base_bdevs_operational": 4, 00:15:12.208 "base_bdevs_list": [ 00:15:12.208 { 00:15:12.208 "name": null, 00:15:12.208 "uuid": "aa05e2f6-834c-4fb0-a863-80bca5234ff6", 00:15:12.208 "is_configured": false, 00:15:12.208 "data_offset": 0, 00:15:12.208 "data_size": 65536 00:15:12.208 }, 00:15:12.208 { 00:15:12.208 "name": null, 00:15:12.208 "uuid": "155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 
00:15:12.208 "is_configured": false, 00:15:12.208 "data_offset": 0, 00:15:12.208 "data_size": 65536 00:15:12.208 }, 00:15:12.208 { 00:15:12.208 "name": "BaseBdev3", 00:15:12.208 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:12.208 "is_configured": true, 00:15:12.208 "data_offset": 0, 00:15:12.208 "data_size": 65536 00:15:12.208 }, 00:15:12.208 { 00:15:12.208 "name": "BaseBdev4", 00:15:12.208 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:12.208 "is_configured": true, 00:15:12.208 "data_offset": 0, 00:15:12.208 "data_size": 65536 00:15:12.208 } 00:15:12.208 ] 00:15:12.208 }' 00:15:12.208 18:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.208 18:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.466 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:12.466 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.726 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:12.726 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:12.985 [2024-07-24 18:18:21.376897] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:12.985 
18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.985 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.985 "name": "Existed_Raid", 00:15:12.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.985 "strip_size_kb": 64, 00:15:12.985 "state": "configuring", 00:15:12.985 "raid_level": "raid0", 00:15:12.985 "superblock": false, 00:15:12.985 "num_base_bdevs": 4, 00:15:12.985 "num_base_bdevs_discovered": 3, 00:15:12.985 "num_base_bdevs_operational": 4, 00:15:12.985 "base_bdevs_list": [ 00:15:12.985 { 00:15:12.985 "name": null, 00:15:12.985 "uuid": "aa05e2f6-834c-4fb0-a863-80bca5234ff6", 00:15:12.985 "is_configured": false, 00:15:12.985 "data_offset": 0, 00:15:12.985 "data_size": 65536 00:15:12.985 }, 00:15:12.985 { 00:15:12.985 "name": "BaseBdev2", 00:15:12.985 "uuid": "155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 00:15:12.986 "is_configured": true, 00:15:12.986 "data_offset": 0, 
00:15:12.986 "data_size": 65536 00:15:12.986 }, 00:15:12.986 { 00:15:12.986 "name": "BaseBdev3", 00:15:12.986 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:12.986 "is_configured": true, 00:15:12.986 "data_offset": 0, 00:15:12.986 "data_size": 65536 00:15:12.986 }, 00:15:12.986 { 00:15:12.986 "name": "BaseBdev4", 00:15:12.986 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:12.986 "is_configured": true, 00:15:12.986 "data_offset": 0, 00:15:12.986 "data_size": 65536 00:15:12.986 } 00:15:12.986 ] 00:15:12.986 }' 00:15:12.986 18:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.986 18:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.554 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.554 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:13.813 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:13.813 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.813 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:13.813 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u aa05e2f6-834c-4fb0-a863-80bca5234ff6 00:15:14.073 [2024-07-24 18:18:22.534709] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:14.073 [2024-07-24 18:18:22.534736] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x169cdc0 00:15:14.073 [2024-07-24 18:18:22.534742] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:14.073 [2024-07-24 18:18:22.534873] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16a1a40 00:15:14.073 [2024-07-24 18:18:22.534954] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x169cdc0 00:15:14.073 [2024-07-24 18:18:22.534961] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x169cdc0 00:15:14.073 [2024-07-24 18:18:22.535081] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.073 NewBaseBdev 00:15:14.073 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:14.073 18:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:14.073 18:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:14.073 18:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:14.073 18:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:14.073 18:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:14.073 18:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.332 18:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:14.332 [ 00:15:14.332 { 00:15:14.332 "name": "NewBaseBdev", 00:15:14.332 "aliases": [ 00:15:14.332 "aa05e2f6-834c-4fb0-a863-80bca5234ff6" 00:15:14.332 ], 00:15:14.332 "product_name": "Malloc disk", 00:15:14.332 
"block_size": 512, 00:15:14.332 "num_blocks": 65536, 00:15:14.332 "uuid": "aa05e2f6-834c-4fb0-a863-80bca5234ff6", 00:15:14.332 "assigned_rate_limits": { 00:15:14.332 "rw_ios_per_sec": 0, 00:15:14.332 "rw_mbytes_per_sec": 0, 00:15:14.332 "r_mbytes_per_sec": 0, 00:15:14.332 "w_mbytes_per_sec": 0 00:15:14.332 }, 00:15:14.332 "claimed": true, 00:15:14.332 "claim_type": "exclusive_write", 00:15:14.332 "zoned": false, 00:15:14.332 "supported_io_types": { 00:15:14.332 "read": true, 00:15:14.332 "write": true, 00:15:14.332 "unmap": true, 00:15:14.332 "flush": true, 00:15:14.332 "reset": true, 00:15:14.332 "nvme_admin": false, 00:15:14.332 "nvme_io": false, 00:15:14.332 "nvme_io_md": false, 00:15:14.332 "write_zeroes": true, 00:15:14.332 "zcopy": true, 00:15:14.332 "get_zone_info": false, 00:15:14.332 "zone_management": false, 00:15:14.332 "zone_append": false, 00:15:14.332 "compare": false, 00:15:14.332 "compare_and_write": false, 00:15:14.332 "abort": true, 00:15:14.332 "seek_hole": false, 00:15:14.332 "seek_data": false, 00:15:14.332 "copy": true, 00:15:14.332 "nvme_iov_md": false 00:15:14.332 }, 00:15:14.332 "memory_domains": [ 00:15:14.332 { 00:15:14.332 "dma_device_id": "system", 00:15:14.332 "dma_device_type": 1 00:15:14.332 }, 00:15:14.332 { 00:15:14.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.332 "dma_device_type": 2 00:15:14.332 } 00:15:14.332 ], 00:15:14.332 "driver_specific": {} 00:15:14.332 } 00:15:14.332 ] 00:15:14.332 18:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:14.332 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:14.332 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.332 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:14.332 18:18:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:14.332 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.332 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:14.333 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.333 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.333 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.333 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.333 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.333 18:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.592 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.592 "name": "Existed_Raid", 00:15:14.592 "uuid": "8d28cf06-51c8-40da-b91f-1b2f14962336", 00:15:14.592 "strip_size_kb": 64, 00:15:14.592 "state": "online", 00:15:14.592 "raid_level": "raid0", 00:15:14.592 "superblock": false, 00:15:14.592 "num_base_bdevs": 4, 00:15:14.592 "num_base_bdevs_discovered": 4, 00:15:14.592 "num_base_bdevs_operational": 4, 00:15:14.592 "base_bdevs_list": [ 00:15:14.592 { 00:15:14.592 "name": "NewBaseBdev", 00:15:14.592 "uuid": "aa05e2f6-834c-4fb0-a863-80bca5234ff6", 00:15:14.592 "is_configured": true, 00:15:14.592 "data_offset": 0, 00:15:14.592 "data_size": 65536 00:15:14.592 }, 00:15:14.592 { 00:15:14.592 "name": "BaseBdev2", 00:15:14.592 "uuid": "155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 00:15:14.592 "is_configured": true, 00:15:14.592 "data_offset": 0, 00:15:14.592 "data_size": 65536 00:15:14.592 }, 
00:15:14.592 { 00:15:14.592 "name": "BaseBdev3", 00:15:14.592 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:14.592 "is_configured": true, 00:15:14.592 "data_offset": 0, 00:15:14.592 "data_size": 65536 00:15:14.592 }, 00:15:14.592 { 00:15:14.592 "name": "BaseBdev4", 00:15:14.592 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:14.592 "is_configured": true, 00:15:14.592 "data_offset": 0, 00:15:14.592 "data_size": 65536 00:15:14.592 } 00:15:14.592 ] 00:15:14.592 }' 00:15:14.592 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.592 18:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.161 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:15.161 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:15.161 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:15.161 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:15.161 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:15.161 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:15.161 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:15.161 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:15.161 [2024-07-24 18:18:23.657801] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.161 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:15.161 "name": "Existed_Raid", 00:15:15.161 "aliases": [ 00:15:15.161 "8d28cf06-51c8-40da-b91f-1b2f14962336" 
00:15:15.161 ], 00:15:15.161 "product_name": "Raid Volume", 00:15:15.161 "block_size": 512, 00:15:15.161 "num_blocks": 262144, 00:15:15.161 "uuid": "8d28cf06-51c8-40da-b91f-1b2f14962336", 00:15:15.161 "assigned_rate_limits": { 00:15:15.161 "rw_ios_per_sec": 0, 00:15:15.161 "rw_mbytes_per_sec": 0, 00:15:15.161 "r_mbytes_per_sec": 0, 00:15:15.161 "w_mbytes_per_sec": 0 00:15:15.161 }, 00:15:15.161 "claimed": false, 00:15:15.161 "zoned": false, 00:15:15.161 "supported_io_types": { 00:15:15.161 "read": true, 00:15:15.161 "write": true, 00:15:15.161 "unmap": true, 00:15:15.161 "flush": true, 00:15:15.161 "reset": true, 00:15:15.161 "nvme_admin": false, 00:15:15.161 "nvme_io": false, 00:15:15.161 "nvme_io_md": false, 00:15:15.161 "write_zeroes": true, 00:15:15.161 "zcopy": false, 00:15:15.161 "get_zone_info": false, 00:15:15.161 "zone_management": false, 00:15:15.161 "zone_append": false, 00:15:15.161 "compare": false, 00:15:15.161 "compare_and_write": false, 00:15:15.161 "abort": false, 00:15:15.161 "seek_hole": false, 00:15:15.161 "seek_data": false, 00:15:15.161 "copy": false, 00:15:15.161 "nvme_iov_md": false 00:15:15.161 }, 00:15:15.161 "memory_domains": [ 00:15:15.161 { 00:15:15.161 "dma_device_id": "system", 00:15:15.161 "dma_device_type": 1 00:15:15.161 }, 00:15:15.161 { 00:15:15.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.161 "dma_device_type": 2 00:15:15.161 }, 00:15:15.161 { 00:15:15.161 "dma_device_id": "system", 00:15:15.161 "dma_device_type": 1 00:15:15.161 }, 00:15:15.161 { 00:15:15.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.161 "dma_device_type": 2 00:15:15.161 }, 00:15:15.161 { 00:15:15.161 "dma_device_id": "system", 00:15:15.161 "dma_device_type": 1 00:15:15.161 }, 00:15:15.161 { 00:15:15.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.161 "dma_device_type": 2 00:15:15.161 }, 00:15:15.161 { 00:15:15.161 "dma_device_id": "system", 00:15:15.161 "dma_device_type": 1 00:15:15.161 }, 00:15:15.161 { 00:15:15.161 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:15.161 "dma_device_type": 2 00:15:15.161 } 00:15:15.161 ], 00:15:15.161 "driver_specific": { 00:15:15.161 "raid": { 00:15:15.161 "uuid": "8d28cf06-51c8-40da-b91f-1b2f14962336", 00:15:15.161 "strip_size_kb": 64, 00:15:15.162 "state": "online", 00:15:15.162 "raid_level": "raid0", 00:15:15.162 "superblock": false, 00:15:15.162 "num_base_bdevs": 4, 00:15:15.162 "num_base_bdevs_discovered": 4, 00:15:15.162 "num_base_bdevs_operational": 4, 00:15:15.162 "base_bdevs_list": [ 00:15:15.162 { 00:15:15.162 "name": "NewBaseBdev", 00:15:15.162 "uuid": "aa05e2f6-834c-4fb0-a863-80bca5234ff6", 00:15:15.162 "is_configured": true, 00:15:15.162 "data_offset": 0, 00:15:15.162 "data_size": 65536 00:15:15.162 }, 00:15:15.162 { 00:15:15.162 "name": "BaseBdev2", 00:15:15.162 "uuid": "155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 00:15:15.162 "is_configured": true, 00:15:15.162 "data_offset": 0, 00:15:15.162 "data_size": 65536 00:15:15.162 }, 00:15:15.162 { 00:15:15.162 "name": "BaseBdev3", 00:15:15.162 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:15.162 "is_configured": true, 00:15:15.162 "data_offset": 0, 00:15:15.162 "data_size": 65536 00:15:15.162 }, 00:15:15.162 { 00:15:15.162 "name": "BaseBdev4", 00:15:15.162 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:15.162 "is_configured": true, 00:15:15.162 "data_offset": 0, 00:15:15.162 "data_size": 65536 00:15:15.162 } 00:15:15.162 ] 00:15:15.162 } 00:15:15.162 } 00:15:15.162 }' 00:15:15.162 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:15.162 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:15.162 BaseBdev2 00:15:15.162 BaseBdev3 00:15:15.162 BaseBdev4' 00:15:15.162 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:15.162 18:18:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:15.162 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:15.421 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:15.421 "name": "NewBaseBdev", 00:15:15.421 "aliases": [ 00:15:15.421 "aa05e2f6-834c-4fb0-a863-80bca5234ff6" 00:15:15.421 ], 00:15:15.421 "product_name": "Malloc disk", 00:15:15.421 "block_size": 512, 00:15:15.421 "num_blocks": 65536, 00:15:15.421 "uuid": "aa05e2f6-834c-4fb0-a863-80bca5234ff6", 00:15:15.421 "assigned_rate_limits": { 00:15:15.421 "rw_ios_per_sec": 0, 00:15:15.421 "rw_mbytes_per_sec": 0, 00:15:15.421 "r_mbytes_per_sec": 0, 00:15:15.421 "w_mbytes_per_sec": 0 00:15:15.421 }, 00:15:15.421 "claimed": true, 00:15:15.421 "claim_type": "exclusive_write", 00:15:15.421 "zoned": false, 00:15:15.422 "supported_io_types": { 00:15:15.422 "read": true, 00:15:15.422 "write": true, 00:15:15.422 "unmap": true, 00:15:15.422 "flush": true, 00:15:15.422 "reset": true, 00:15:15.422 "nvme_admin": false, 00:15:15.422 "nvme_io": false, 00:15:15.422 "nvme_io_md": false, 00:15:15.422 "write_zeroes": true, 00:15:15.422 "zcopy": true, 00:15:15.422 "get_zone_info": false, 00:15:15.422 "zone_management": false, 00:15:15.422 "zone_append": false, 00:15:15.422 "compare": false, 00:15:15.422 "compare_and_write": false, 00:15:15.422 "abort": true, 00:15:15.422 "seek_hole": false, 00:15:15.422 "seek_data": false, 00:15:15.422 "copy": true, 00:15:15.422 "nvme_iov_md": false 00:15:15.422 }, 00:15:15.422 "memory_domains": [ 00:15:15.422 { 00:15:15.422 "dma_device_id": "system", 00:15:15.422 "dma_device_type": 1 00:15:15.422 }, 00:15:15.422 { 00:15:15.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.422 "dma_device_type": 2 00:15:15.422 } 00:15:15.422 ], 00:15:15.422 "driver_specific": {} 00:15:15.422 }' 00:15:15.422 18:18:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.422 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.422 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:15.422 18:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.422 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.681 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:15.681 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:15.681 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:15.681 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:15.681 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:15.681 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:15.681 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:15.681 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:15.681 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:15.681 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:15.942 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:15.942 "name": "BaseBdev2", 00:15:15.942 "aliases": [ 00:15:15.942 "155e9c19-d125-4aac-a6c7-0daa31aa3fa6" 00:15:15.942 ], 00:15:15.942 "product_name": "Malloc disk", 00:15:15.942 "block_size": 512, 00:15:15.942 "num_blocks": 65536, 00:15:15.942 "uuid": 
"155e9c19-d125-4aac-a6c7-0daa31aa3fa6", 00:15:15.942 "assigned_rate_limits": { 00:15:15.942 "rw_ios_per_sec": 0, 00:15:15.942 "rw_mbytes_per_sec": 0, 00:15:15.942 "r_mbytes_per_sec": 0, 00:15:15.942 "w_mbytes_per_sec": 0 00:15:15.942 }, 00:15:15.942 "claimed": true, 00:15:15.942 "claim_type": "exclusive_write", 00:15:15.942 "zoned": false, 00:15:15.942 "supported_io_types": { 00:15:15.942 "read": true, 00:15:15.942 "write": true, 00:15:15.942 "unmap": true, 00:15:15.942 "flush": true, 00:15:15.942 "reset": true, 00:15:15.942 "nvme_admin": false, 00:15:15.942 "nvme_io": false, 00:15:15.942 "nvme_io_md": false, 00:15:15.942 "write_zeroes": true, 00:15:15.942 "zcopy": true, 00:15:15.942 "get_zone_info": false, 00:15:15.942 "zone_management": false, 00:15:15.942 "zone_append": false, 00:15:15.942 "compare": false, 00:15:15.942 "compare_and_write": false, 00:15:15.942 "abort": true, 00:15:15.942 "seek_hole": false, 00:15:15.942 "seek_data": false, 00:15:15.942 "copy": true, 00:15:15.942 "nvme_iov_md": false 00:15:15.942 }, 00:15:15.942 "memory_domains": [ 00:15:15.942 { 00:15:15.942 "dma_device_id": "system", 00:15:15.942 "dma_device_type": 1 00:15:15.942 }, 00:15:15.942 { 00:15:15.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.942 "dma_device_type": 2 00:15:15.942 } 00:15:15.942 ], 00:15:15.942 "driver_specific": {} 00:15:15.942 }' 00:15:15.942 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.942 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.942 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:15.942 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.942 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.942 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:15.942 18:18:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.202 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.202 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:16.202 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.202 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.202 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:16.202 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:16.202 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:16.202 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:16.462 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:16.462 "name": "BaseBdev3", 00:15:16.462 "aliases": [ 00:15:16.462 "56e3375e-6781-4cd6-8fa4-51d06e14d3eb" 00:15:16.462 ], 00:15:16.462 "product_name": "Malloc disk", 00:15:16.462 "block_size": 512, 00:15:16.462 "num_blocks": 65536, 00:15:16.462 "uuid": "56e3375e-6781-4cd6-8fa4-51d06e14d3eb", 00:15:16.462 "assigned_rate_limits": { 00:15:16.462 "rw_ios_per_sec": 0, 00:15:16.462 "rw_mbytes_per_sec": 0, 00:15:16.462 "r_mbytes_per_sec": 0, 00:15:16.462 "w_mbytes_per_sec": 0 00:15:16.462 }, 00:15:16.462 "claimed": true, 00:15:16.462 "claim_type": "exclusive_write", 00:15:16.462 "zoned": false, 00:15:16.462 "supported_io_types": { 00:15:16.462 "read": true, 00:15:16.462 "write": true, 00:15:16.462 "unmap": true, 00:15:16.462 "flush": true, 00:15:16.462 "reset": true, 00:15:16.462 "nvme_admin": false, 00:15:16.462 "nvme_io": false, 00:15:16.462 "nvme_io_md": false, 
00:15:16.462 "write_zeroes": true, 00:15:16.462 "zcopy": true, 00:15:16.462 "get_zone_info": false, 00:15:16.462 "zone_management": false, 00:15:16.462 "zone_append": false, 00:15:16.462 "compare": false, 00:15:16.462 "compare_and_write": false, 00:15:16.462 "abort": true, 00:15:16.462 "seek_hole": false, 00:15:16.462 "seek_data": false, 00:15:16.462 "copy": true, 00:15:16.462 "nvme_iov_md": false 00:15:16.462 }, 00:15:16.462 "memory_domains": [ 00:15:16.462 { 00:15:16.462 "dma_device_id": "system", 00:15:16.462 "dma_device_type": 1 00:15:16.462 }, 00:15:16.462 { 00:15:16.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.462 "dma_device_type": 2 00:15:16.462 } 00:15:16.462 ], 00:15:16.462 "driver_specific": {} 00:15:16.462 }' 00:15:16.462 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.462 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.462 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:16.462 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.462 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.462 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:16.462 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.462 18:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.462 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:16.462 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.722 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.722 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:16.722 18:18:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:16.722 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:16.722 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:16.722 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:16.722 "name": "BaseBdev4", 00:15:16.722 "aliases": [ 00:15:16.722 "7b4119f2-789b-481b-8086-fe018fbf7ff6" 00:15:16.722 ], 00:15:16.722 "product_name": "Malloc disk", 00:15:16.722 "block_size": 512, 00:15:16.722 "num_blocks": 65536, 00:15:16.722 "uuid": "7b4119f2-789b-481b-8086-fe018fbf7ff6", 00:15:16.722 "assigned_rate_limits": { 00:15:16.722 "rw_ios_per_sec": 0, 00:15:16.722 "rw_mbytes_per_sec": 0, 00:15:16.722 "r_mbytes_per_sec": 0, 00:15:16.722 "w_mbytes_per_sec": 0 00:15:16.722 }, 00:15:16.722 "claimed": true, 00:15:16.722 "claim_type": "exclusive_write", 00:15:16.722 "zoned": false, 00:15:16.722 "supported_io_types": { 00:15:16.722 "read": true, 00:15:16.722 "write": true, 00:15:16.722 "unmap": true, 00:15:16.722 "flush": true, 00:15:16.722 "reset": true, 00:15:16.722 "nvme_admin": false, 00:15:16.722 "nvme_io": false, 00:15:16.722 "nvme_io_md": false, 00:15:16.722 "write_zeroes": true, 00:15:16.722 "zcopy": true, 00:15:16.722 "get_zone_info": false, 00:15:16.722 "zone_management": false, 00:15:16.722 "zone_append": false, 00:15:16.722 "compare": false, 00:15:16.722 "compare_and_write": false, 00:15:16.722 "abort": true, 00:15:16.722 "seek_hole": false, 00:15:16.722 "seek_data": false, 00:15:16.722 "copy": true, 00:15:16.722 "nvme_iov_md": false 00:15:16.722 }, 00:15:16.722 "memory_domains": [ 00:15:16.722 { 00:15:16.722 "dma_device_id": "system", 00:15:16.722 "dma_device_type": 1 00:15:16.722 }, 00:15:16.722 { 00:15:16.722 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:16.722 "dma_device_type": 2 00:15:16.722 } 00:15:16.722 ], 00:15:16.722 "driver_specific": {} 00:15:16.722 }' 00:15:16.722 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.982 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.982 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:16.982 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.982 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.982 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:16.982 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.982 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.982 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:16.982 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.982 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:17.242 [2024-07-24 18:18:25.747001] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:17.242 [2024-07-24 18:18:25.747021] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.242 [2024-07-24 18:18:25.747061] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.242 [2024-07-24 18:18:25.747109] 
bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.242 [2024-07-24 18:18:25.747121] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x169cdc0 name Existed_Raid, state offline 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2211695 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2211695 ']' 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2211695 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2211695 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2211695' 00:15:17.242 killing process with pid 2211695 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2211695 00:15:17.242 [2024-07-24 18:18:25.816204] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.242 18:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2211695 00:15:17.501 [2024-07-24 18:18:25.848612] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.501 18:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:17.501 00:15:17.501 real 0m24.316s 00:15:17.501 user 0m44.376s 00:15:17.501 sys 0m4.719s 00:15:17.501 
18:18:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.501 18:18:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.501 ************************************ 00:15:17.501 END TEST raid_state_function_test 00:15:17.501 ************************************ 00:15:17.501 18:18:26 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:15:17.501 18:18:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:17.501 18:18:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.501 18:18:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.761 ************************************ 00:15:17.761 START TEST raid_state_function_test_sb 00:15:17.761 ************************************ 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2217076 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2217076' 00:15:17.761 Process raid pid: 2217076 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2217076 /var/tmp/spdk-raid.sock 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2217076 ']' 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:17.761 18:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.761 [2024-07-24 18:18:26.158599] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:15:17.762 [2024-07-24 18:18:26.158654] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:01.0 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:01.1 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:01.2 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:01.3 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:01.4 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:01.5 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:01.6 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:01.7 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:02.0 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:02.1 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:02.2 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:02.3 cannot be used 00:15:17.762 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:02.4 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:02.5 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:02.6 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b3:02.7 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:01.0 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:01.1 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:01.2 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:01.3 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:01.4 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:01.5 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:01.6 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:01.7 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:02.0 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:02.1 cannot be used 00:15:17.762 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:02.2 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:02.3 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:02.4 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:02.5 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:02.6 cannot be used 00:15:17.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:17.762 EAL: Requested device 0000:b5:02.7 cannot be used 00:15:17.762 [2024-07-24 18:18:26.252036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.762 [2024-07-24 18:18:26.325477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.021 [2024-07-24 18:18:26.382225] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.021 [2024-07-24 18:18:26.382245] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.589 18:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.589 18:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:18.589 18:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:18.589 [2024-07-24 18:18:27.089581] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.589 [2024-07-24 18:18:27.089611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:15:18.589 [2024-07-24 18:18:27.089619] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.589 [2024-07-24 18:18:27.089632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.589 [2024-07-24 18:18:27.089638] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:18.589 [2024-07-24 18:18:27.089646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:18.589 [2024-07-24 18:18:27.089654] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:18.589 [2024-07-24 18:18:27.089661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.589 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.848 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.848 "name": "Existed_Raid", 00:15:18.848 "uuid": "4f64444b-f5ec-4064-802f-f67101da6948", 00:15:18.848 "strip_size_kb": 64, 00:15:18.848 "state": "configuring", 00:15:18.848 "raid_level": "raid0", 00:15:18.848 "superblock": true, 00:15:18.848 "num_base_bdevs": 4, 00:15:18.848 "num_base_bdevs_discovered": 0, 00:15:18.848 "num_base_bdevs_operational": 4, 00:15:18.848 "base_bdevs_list": [ 00:15:18.848 { 00:15:18.848 "name": "BaseBdev1", 00:15:18.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.848 "is_configured": false, 00:15:18.848 "data_offset": 0, 00:15:18.848 "data_size": 0 00:15:18.848 }, 00:15:18.848 { 00:15:18.848 "name": "BaseBdev2", 00:15:18.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.848 "is_configured": false, 00:15:18.848 "data_offset": 0, 00:15:18.848 "data_size": 0 00:15:18.848 }, 00:15:18.848 { 00:15:18.848 "name": "BaseBdev3", 00:15:18.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.848 "is_configured": false, 00:15:18.848 "data_offset": 0, 00:15:18.848 "data_size": 0 00:15:18.848 }, 00:15:18.848 { 00:15:18.848 "name": "BaseBdev4", 00:15:18.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.848 "is_configured": false, 00:15:18.848 "data_offset": 0, 00:15:18.848 "data_size": 0 00:15:18.848 } 00:15:18.848 ] 00:15:18.848 }' 00:15:18.848 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.848 18:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.416 
18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:19.416 [2024-07-24 18:18:27.891538] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.416 [2024-07-24 18:18:27.891555] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20241e0 name Existed_Raid, state configuring 00:15:19.416 18:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:19.675 [2024-07-24 18:18:28.064000] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.675 [2024-07-24 18:18:28.064015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.675 [2024-07-24 18:18:28.064021] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.675 [2024-07-24 18:18:28.064028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.675 [2024-07-24 18:18:28.064037] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.675 [2024-07-24 18:18:28.064044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.675 [2024-07-24 18:18:28.064049] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:19.675 [2024-07-24 18:18:28.064056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:19.675 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev1 00:15:19.675 [2024-07-24 18:18:28.240889] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.675 BaseBdev1 00:15:19.675 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:19.675 18:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:19.675 18:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:19.676 18:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:19.676 18:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:19.676 18:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:19.676 18:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.934 18:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.193 [ 00:15:20.193 { 00:15:20.193 "name": "BaseBdev1", 00:15:20.193 "aliases": [ 00:15:20.193 "51fda6b0-0a06-4e4e-9990-4871985a21bd" 00:15:20.193 ], 00:15:20.193 "product_name": "Malloc disk", 00:15:20.193 "block_size": 512, 00:15:20.193 "num_blocks": 65536, 00:15:20.193 "uuid": "51fda6b0-0a06-4e4e-9990-4871985a21bd", 00:15:20.193 "assigned_rate_limits": { 00:15:20.193 "rw_ios_per_sec": 0, 00:15:20.193 "rw_mbytes_per_sec": 0, 00:15:20.193 "r_mbytes_per_sec": 0, 00:15:20.193 "w_mbytes_per_sec": 0 00:15:20.193 }, 00:15:20.193 "claimed": true, 00:15:20.193 "claim_type": "exclusive_write", 00:15:20.193 "zoned": false, 00:15:20.193 "supported_io_types": { 00:15:20.193 "read": true, 00:15:20.193 "write": 
true, 00:15:20.193 "unmap": true, 00:15:20.193 "flush": true, 00:15:20.193 "reset": true, 00:15:20.193 "nvme_admin": false, 00:15:20.193 "nvme_io": false, 00:15:20.193 "nvme_io_md": false, 00:15:20.193 "write_zeroes": true, 00:15:20.193 "zcopy": true, 00:15:20.193 "get_zone_info": false, 00:15:20.193 "zone_management": false, 00:15:20.193 "zone_append": false, 00:15:20.193 "compare": false, 00:15:20.193 "compare_and_write": false, 00:15:20.193 "abort": true, 00:15:20.193 "seek_hole": false, 00:15:20.193 "seek_data": false, 00:15:20.194 "copy": true, 00:15:20.194 "nvme_iov_md": false 00:15:20.194 }, 00:15:20.194 "memory_domains": [ 00:15:20.194 { 00:15:20.194 "dma_device_id": "system", 00:15:20.194 "dma_device_type": 1 00:15:20.194 }, 00:15:20.194 { 00:15:20.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.194 "dma_device_type": 2 00:15:20.194 } 00:15:20.194 ], 00:15:20.194 "driver_specific": {} 00:15:20.194 } 00:15:20.194 ] 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.194 "name": "Existed_Raid", 00:15:20.194 "uuid": "e549ab2e-a0e1-43f8-a88b-9d9900329521", 00:15:20.194 "strip_size_kb": 64, 00:15:20.194 "state": "configuring", 00:15:20.194 "raid_level": "raid0", 00:15:20.194 "superblock": true, 00:15:20.194 "num_base_bdevs": 4, 00:15:20.194 "num_base_bdevs_discovered": 1, 00:15:20.194 "num_base_bdevs_operational": 4, 00:15:20.194 "base_bdevs_list": [ 00:15:20.194 { 00:15:20.194 "name": "BaseBdev1", 00:15:20.194 "uuid": "51fda6b0-0a06-4e4e-9990-4871985a21bd", 00:15:20.194 "is_configured": true, 00:15:20.194 "data_offset": 2048, 00:15:20.194 "data_size": 63488 00:15:20.194 }, 00:15:20.194 { 00:15:20.194 "name": "BaseBdev2", 00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.194 "is_configured": false, 00:15:20.194 "data_offset": 0, 00:15:20.194 "data_size": 0 00:15:20.194 }, 00:15:20.194 { 00:15:20.194 "name": "BaseBdev3", 00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.194 "is_configured": false, 00:15:20.194 "data_offset": 0, 00:15:20.194 "data_size": 0 00:15:20.194 }, 00:15:20.194 { 00:15:20.194 "name": "BaseBdev4", 00:15:20.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.194 "is_configured": false, 00:15:20.194 "data_offset": 0, 00:15:20.194 "data_size": 0 00:15:20.194 } 00:15:20.194 ] 
00:15:20.194 }' 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.194 18:18:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.761 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:21.021 [2024-07-24 18:18:29.403918] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.021 [2024-07-24 18:18:29.403949] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2023a50 name Existed_Raid, state configuring 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:21.021 [2024-07-24 18:18:29.572380] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.021 [2024-07-24 18:18:29.573398] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.021 [2024-07-24 18:18:29.573422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.021 [2024-07-24 18:18:29.573428] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.021 [2024-07-24 18:18:29.573436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.021 [2024-07-24 18:18:29.573441] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:21.021 [2024-07-24 18:18:29.573448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.021 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.280 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.280 "name": "Existed_Raid", 00:15:21.280 "uuid": "d63abf09-b121-4cce-9d84-0a214b9122a4", 00:15:21.280 "strip_size_kb": 64, 00:15:21.280 "state": "configuring", 00:15:21.280 "raid_level": "raid0", 00:15:21.280 "superblock": true, 
00:15:21.280 "num_base_bdevs": 4, 00:15:21.280 "num_base_bdevs_discovered": 1, 00:15:21.280 "num_base_bdevs_operational": 4, 00:15:21.280 "base_bdevs_list": [ 00:15:21.280 { 00:15:21.280 "name": "BaseBdev1", 00:15:21.280 "uuid": "51fda6b0-0a06-4e4e-9990-4871985a21bd", 00:15:21.280 "is_configured": true, 00:15:21.280 "data_offset": 2048, 00:15:21.280 "data_size": 63488 00:15:21.280 }, 00:15:21.280 { 00:15:21.280 "name": "BaseBdev2", 00:15:21.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.280 "is_configured": false, 00:15:21.280 "data_offset": 0, 00:15:21.280 "data_size": 0 00:15:21.280 }, 00:15:21.280 { 00:15:21.280 "name": "BaseBdev3", 00:15:21.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.280 "is_configured": false, 00:15:21.280 "data_offset": 0, 00:15:21.280 "data_size": 0 00:15:21.280 }, 00:15:21.280 { 00:15:21.280 "name": "BaseBdev4", 00:15:21.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.281 "is_configured": false, 00:15:21.281 "data_offset": 0, 00:15:21.281 "data_size": 0 00:15:21.281 } 00:15:21.281 ] 00:15:21.281 }' 00:15:21.281 18:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.281 18:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.848 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:21.848 [2024-07-24 18:18:30.441318] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.848 BaseBdev2 00:15:22.108 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:22.108 18:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:22.108 18:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:15:22.108 18:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:22.108 18:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:22.108 18:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:22.108 18:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.108 18:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.368 [ 00:15:22.368 { 00:15:22.368 "name": "BaseBdev2", 00:15:22.368 "aliases": [ 00:15:22.368 "cc7b39e8-94ab-45a6-adb8-6b3a47d628cb" 00:15:22.368 ], 00:15:22.368 "product_name": "Malloc disk", 00:15:22.368 "block_size": 512, 00:15:22.368 "num_blocks": 65536, 00:15:22.368 "uuid": "cc7b39e8-94ab-45a6-adb8-6b3a47d628cb", 00:15:22.368 "assigned_rate_limits": { 00:15:22.368 "rw_ios_per_sec": 0, 00:15:22.368 "rw_mbytes_per_sec": 0, 00:15:22.368 "r_mbytes_per_sec": 0, 00:15:22.368 "w_mbytes_per_sec": 0 00:15:22.368 }, 00:15:22.368 "claimed": true, 00:15:22.368 "claim_type": "exclusive_write", 00:15:22.368 "zoned": false, 00:15:22.368 "supported_io_types": { 00:15:22.368 "read": true, 00:15:22.368 "write": true, 00:15:22.368 "unmap": true, 00:15:22.368 "flush": true, 00:15:22.368 "reset": true, 00:15:22.368 "nvme_admin": false, 00:15:22.368 "nvme_io": false, 00:15:22.368 "nvme_io_md": false, 00:15:22.368 "write_zeroes": true, 00:15:22.368 "zcopy": true, 00:15:22.368 "get_zone_info": false, 00:15:22.368 "zone_management": false, 00:15:22.368 "zone_append": false, 00:15:22.368 "compare": false, 00:15:22.368 "compare_and_write": false, 00:15:22.368 "abort": true, 00:15:22.368 "seek_hole": false, 
00:15:22.368 "seek_data": false, 00:15:22.368 "copy": true, 00:15:22.368 "nvme_iov_md": false 00:15:22.368 }, 00:15:22.368 "memory_domains": [ 00:15:22.368 { 00:15:22.368 "dma_device_id": "system", 00:15:22.368 "dma_device_type": 1 00:15:22.368 }, 00:15:22.368 { 00:15:22.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.368 "dma_device_type": 2 00:15:22.368 } 00:15:22.368 ], 00:15:22.368 "driver_specific": {} 00:15:22.368 } 00:15:22.368 ] 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.368 18:18:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.368 "name": "Existed_Raid", 00:15:22.368 "uuid": "d63abf09-b121-4cce-9d84-0a214b9122a4", 00:15:22.368 "strip_size_kb": 64, 00:15:22.368 "state": "configuring", 00:15:22.368 "raid_level": "raid0", 00:15:22.368 "superblock": true, 00:15:22.368 "num_base_bdevs": 4, 00:15:22.368 "num_base_bdevs_discovered": 2, 00:15:22.368 "num_base_bdevs_operational": 4, 00:15:22.368 "base_bdevs_list": [ 00:15:22.368 { 00:15:22.368 "name": "BaseBdev1", 00:15:22.368 "uuid": "51fda6b0-0a06-4e4e-9990-4871985a21bd", 00:15:22.368 "is_configured": true, 00:15:22.368 "data_offset": 2048, 00:15:22.368 "data_size": 63488 00:15:22.368 }, 00:15:22.368 { 00:15:22.368 "name": "BaseBdev2", 00:15:22.368 "uuid": "cc7b39e8-94ab-45a6-adb8-6b3a47d628cb", 00:15:22.368 "is_configured": true, 00:15:22.368 "data_offset": 2048, 00:15:22.368 "data_size": 63488 00:15:22.368 }, 00:15:22.368 { 00:15:22.368 "name": "BaseBdev3", 00:15:22.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.368 "is_configured": false, 00:15:22.368 "data_offset": 0, 00:15:22.368 "data_size": 0 00:15:22.368 }, 00:15:22.368 { 00:15:22.368 "name": "BaseBdev4", 00:15:22.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.368 "is_configured": false, 00:15:22.368 "data_offset": 0, 00:15:22.368 "data_size": 0 00:15:22.368 } 00:15:22.368 ] 00:15:22.368 }' 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.368 18:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.937 18:18:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:23.197 [2024-07-24 18:18:31.607139] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.197 BaseBdev3 00:15:23.197 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:23.197 18:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:23.197 18:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:23.197 18:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:23.197 18:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:23.197 18:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:23.197 18:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.456 18:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:23.456 [ 00:15:23.456 { 00:15:23.456 "name": "BaseBdev3", 00:15:23.456 "aliases": [ 00:15:23.456 "148335ed-c7bf-45f0-8d6a-081a88d8d31f" 00:15:23.456 ], 00:15:23.456 "product_name": "Malloc disk", 00:15:23.456 "block_size": 512, 00:15:23.456 "num_blocks": 65536, 00:15:23.456 "uuid": "148335ed-c7bf-45f0-8d6a-081a88d8d31f", 00:15:23.456 "assigned_rate_limits": { 00:15:23.456 "rw_ios_per_sec": 0, 00:15:23.456 "rw_mbytes_per_sec": 0, 00:15:23.456 "r_mbytes_per_sec": 0, 00:15:23.456 "w_mbytes_per_sec": 0 00:15:23.456 }, 
00:15:23.457 "claimed": true, 00:15:23.457 "claim_type": "exclusive_write", 00:15:23.457 "zoned": false, 00:15:23.457 "supported_io_types": { 00:15:23.457 "read": true, 00:15:23.457 "write": true, 00:15:23.457 "unmap": true, 00:15:23.457 "flush": true, 00:15:23.457 "reset": true, 00:15:23.457 "nvme_admin": false, 00:15:23.457 "nvme_io": false, 00:15:23.457 "nvme_io_md": false, 00:15:23.457 "write_zeroes": true, 00:15:23.457 "zcopy": true, 00:15:23.457 "get_zone_info": false, 00:15:23.457 "zone_management": false, 00:15:23.457 "zone_append": false, 00:15:23.457 "compare": false, 00:15:23.457 "compare_and_write": false, 00:15:23.457 "abort": true, 00:15:23.457 "seek_hole": false, 00:15:23.457 "seek_data": false, 00:15:23.457 "copy": true, 00:15:23.457 "nvme_iov_md": false 00:15:23.457 }, 00:15:23.457 "memory_domains": [ 00:15:23.457 { 00:15:23.457 "dma_device_id": "system", 00:15:23.457 "dma_device_type": 1 00:15:23.457 }, 00:15:23.457 { 00:15:23.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.457 "dma_device_type": 2 00:15:23.457 } 00:15:23.457 ], 00:15:23.457 "driver_specific": {} 00:15:23.457 } 00:15:23.457 ] 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:23.457 18:18:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.457 18:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.716 18:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.716 "name": "Existed_Raid", 00:15:23.716 "uuid": "d63abf09-b121-4cce-9d84-0a214b9122a4", 00:15:23.716 "strip_size_kb": 64, 00:15:23.716 "state": "configuring", 00:15:23.716 "raid_level": "raid0", 00:15:23.716 "superblock": true, 00:15:23.716 "num_base_bdevs": 4, 00:15:23.716 "num_base_bdevs_discovered": 3, 00:15:23.716 "num_base_bdevs_operational": 4, 00:15:23.716 "base_bdevs_list": [ 00:15:23.716 { 00:15:23.716 "name": "BaseBdev1", 00:15:23.716 "uuid": "51fda6b0-0a06-4e4e-9990-4871985a21bd", 00:15:23.716 "is_configured": true, 00:15:23.716 "data_offset": 2048, 00:15:23.716 "data_size": 63488 00:15:23.716 }, 00:15:23.716 { 00:15:23.716 "name": "BaseBdev2", 00:15:23.716 "uuid": "cc7b39e8-94ab-45a6-adb8-6b3a47d628cb", 00:15:23.716 "is_configured": true, 00:15:23.716 "data_offset": 2048, 00:15:23.716 "data_size": 63488 00:15:23.716 }, 00:15:23.716 { 00:15:23.716 "name": 
"BaseBdev3", 00:15:23.716 "uuid": "148335ed-c7bf-45f0-8d6a-081a88d8d31f", 00:15:23.716 "is_configured": true, 00:15:23.716 "data_offset": 2048, 00:15:23.716 "data_size": 63488 00:15:23.716 }, 00:15:23.716 { 00:15:23.716 "name": "BaseBdev4", 00:15:23.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.716 "is_configured": false, 00:15:23.716 "data_offset": 0, 00:15:23.716 "data_size": 0 00:15:23.716 } 00:15:23.716 ] 00:15:23.716 }' 00:15:23.716 18:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.717 18:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.285 18:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:24.285 [2024-07-24 18:18:32.788892] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:24.285 [2024-07-24 18:18:32.789026] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2024ab0 00:15:24.285 [2024-07-24 18:18:32.789035] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:24.285 [2024-07-24 18:18:32.789159] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21d7cd0 00:15:24.285 [2024-07-24 18:18:32.789237] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2024ab0 00:15:24.285 [2024-07-24 18:18:32.789243] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2024ab0 00:15:24.285 [2024-07-24 18:18:32.789303] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.285 BaseBdev4 00:15:24.285 18:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:24.285 18:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev4 00:15:24.285 18:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:24.285 18:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:24.285 18:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:24.285 18:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:24.285 18:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:24.544 18:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:24.544 [ 00:15:24.544 { 00:15:24.544 "name": "BaseBdev4", 00:15:24.544 "aliases": [ 00:15:24.544 "cafb5faa-78cf-41a2-8a31-339a30065b41" 00:15:24.544 ], 00:15:24.544 "product_name": "Malloc disk", 00:15:24.544 "block_size": 512, 00:15:24.544 "num_blocks": 65536, 00:15:24.544 "uuid": "cafb5faa-78cf-41a2-8a31-339a30065b41", 00:15:24.544 "assigned_rate_limits": { 00:15:24.544 "rw_ios_per_sec": 0, 00:15:24.544 "rw_mbytes_per_sec": 0, 00:15:24.544 "r_mbytes_per_sec": 0, 00:15:24.544 "w_mbytes_per_sec": 0 00:15:24.544 }, 00:15:24.544 "claimed": true, 00:15:24.544 "claim_type": "exclusive_write", 00:15:24.544 "zoned": false, 00:15:24.544 "supported_io_types": { 00:15:24.544 "read": true, 00:15:24.544 "write": true, 00:15:24.544 "unmap": true, 00:15:24.544 "flush": true, 00:15:24.544 "reset": true, 00:15:24.544 "nvme_admin": false, 00:15:24.544 "nvme_io": false, 00:15:24.544 "nvme_io_md": false, 00:15:24.544 "write_zeroes": true, 00:15:24.544 "zcopy": true, 00:15:24.544 "get_zone_info": false, 00:15:24.544 "zone_management": false, 00:15:24.544 "zone_append": false, 00:15:24.544 
"compare": false, 00:15:24.544 "compare_and_write": false, 00:15:24.544 "abort": true, 00:15:24.544 "seek_hole": false, 00:15:24.544 "seek_data": false, 00:15:24.544 "copy": true, 00:15:24.544 "nvme_iov_md": false 00:15:24.544 }, 00:15:24.544 "memory_domains": [ 00:15:24.544 { 00:15:24.544 "dma_device_id": "system", 00:15:24.544 "dma_device_type": 1 00:15:24.544 }, 00:15:24.544 { 00:15:24.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.544 "dma_device_type": 2 00:15:24.544 } 00:15:24.544 ], 00:15:24.544 "driver_specific": {} 00:15:24.544 } 00:15:24.544 ] 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.803 18:18:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.803 "name": "Existed_Raid", 00:15:24.803 "uuid": "d63abf09-b121-4cce-9d84-0a214b9122a4", 00:15:24.803 "strip_size_kb": 64, 00:15:24.803 "state": "online", 00:15:24.803 "raid_level": "raid0", 00:15:24.803 "superblock": true, 00:15:24.803 "num_base_bdevs": 4, 00:15:24.803 "num_base_bdevs_discovered": 4, 00:15:24.803 "num_base_bdevs_operational": 4, 00:15:24.803 "base_bdevs_list": [ 00:15:24.803 { 00:15:24.803 "name": "BaseBdev1", 00:15:24.803 "uuid": "51fda6b0-0a06-4e4e-9990-4871985a21bd", 00:15:24.803 "is_configured": true, 00:15:24.803 "data_offset": 2048, 00:15:24.803 "data_size": 63488 00:15:24.803 }, 00:15:24.803 { 00:15:24.803 "name": "BaseBdev2", 00:15:24.803 "uuid": "cc7b39e8-94ab-45a6-adb8-6b3a47d628cb", 00:15:24.803 "is_configured": true, 00:15:24.803 "data_offset": 2048, 00:15:24.803 "data_size": 63488 00:15:24.803 }, 00:15:24.803 { 00:15:24.803 "name": "BaseBdev3", 00:15:24.803 "uuid": "148335ed-c7bf-45f0-8d6a-081a88d8d31f", 00:15:24.803 "is_configured": true, 00:15:24.803 "data_offset": 2048, 00:15:24.803 "data_size": 63488 00:15:24.803 }, 00:15:24.803 { 00:15:24.803 "name": "BaseBdev4", 00:15:24.803 "uuid": "cafb5faa-78cf-41a2-8a31-339a30065b41", 00:15:24.803 "is_configured": true, 00:15:24.803 "data_offset": 2048, 00:15:24.803 "data_size": 63488 00:15:24.803 } 00:15:24.803 ] 00:15:24.803 }' 00:15:24.803 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.803 18:18:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.371 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:25.371 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:25.371 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:25.371 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:25.371 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:25.371 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:25.371 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:25.371 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:25.371 [2024-07-24 18:18:33.964136] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.630 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:25.630 "name": "Existed_Raid", 00:15:25.630 "aliases": [ 00:15:25.630 "d63abf09-b121-4cce-9d84-0a214b9122a4" 00:15:25.630 ], 00:15:25.630 "product_name": "Raid Volume", 00:15:25.630 "block_size": 512, 00:15:25.630 "num_blocks": 253952, 00:15:25.630 "uuid": "d63abf09-b121-4cce-9d84-0a214b9122a4", 00:15:25.630 "assigned_rate_limits": { 00:15:25.630 "rw_ios_per_sec": 0, 00:15:25.630 "rw_mbytes_per_sec": 0, 00:15:25.630 "r_mbytes_per_sec": 0, 00:15:25.630 "w_mbytes_per_sec": 0 00:15:25.630 }, 00:15:25.630 "claimed": false, 00:15:25.630 "zoned": false, 00:15:25.630 "supported_io_types": { 00:15:25.630 "read": true, 00:15:25.630 "write": true, 00:15:25.630 "unmap": true, 
00:15:25.630 "flush": true, 00:15:25.630 "reset": true, 00:15:25.630 "nvme_admin": false, 00:15:25.630 "nvme_io": false, 00:15:25.630 "nvme_io_md": false, 00:15:25.630 "write_zeroes": true, 00:15:25.630 "zcopy": false, 00:15:25.630 "get_zone_info": false, 00:15:25.630 "zone_management": false, 00:15:25.630 "zone_append": false, 00:15:25.630 "compare": false, 00:15:25.630 "compare_and_write": false, 00:15:25.630 "abort": false, 00:15:25.630 "seek_hole": false, 00:15:25.630 "seek_data": false, 00:15:25.630 "copy": false, 00:15:25.630 "nvme_iov_md": false 00:15:25.630 }, 00:15:25.630 "memory_domains": [ 00:15:25.630 { 00:15:25.630 "dma_device_id": "system", 00:15:25.630 "dma_device_type": 1 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.630 "dma_device_type": 2 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "dma_device_id": "system", 00:15:25.630 "dma_device_type": 1 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.630 "dma_device_type": 2 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "dma_device_id": "system", 00:15:25.630 "dma_device_type": 1 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.630 "dma_device_type": 2 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "dma_device_id": "system", 00:15:25.630 "dma_device_type": 1 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.630 "dma_device_type": 2 00:15:25.630 } 00:15:25.630 ], 00:15:25.630 "driver_specific": { 00:15:25.630 "raid": { 00:15:25.630 "uuid": "d63abf09-b121-4cce-9d84-0a214b9122a4", 00:15:25.630 "strip_size_kb": 64, 00:15:25.630 "state": "online", 00:15:25.630 "raid_level": "raid0", 00:15:25.630 "superblock": true, 00:15:25.630 "num_base_bdevs": 4, 00:15:25.630 "num_base_bdevs_discovered": 4, 00:15:25.630 "num_base_bdevs_operational": 4, 00:15:25.630 "base_bdevs_list": [ 00:15:25.630 { 00:15:25.630 "name": "BaseBdev1", 00:15:25.630 
"uuid": "51fda6b0-0a06-4e4e-9990-4871985a21bd", 00:15:25.630 "is_configured": true, 00:15:25.630 "data_offset": 2048, 00:15:25.630 "data_size": 63488 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "name": "BaseBdev2", 00:15:25.630 "uuid": "cc7b39e8-94ab-45a6-adb8-6b3a47d628cb", 00:15:25.630 "is_configured": true, 00:15:25.630 "data_offset": 2048, 00:15:25.630 "data_size": 63488 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "name": "BaseBdev3", 00:15:25.630 "uuid": "148335ed-c7bf-45f0-8d6a-081a88d8d31f", 00:15:25.630 "is_configured": true, 00:15:25.630 "data_offset": 2048, 00:15:25.630 "data_size": 63488 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "name": "BaseBdev4", 00:15:25.630 "uuid": "cafb5faa-78cf-41a2-8a31-339a30065b41", 00:15:25.630 "is_configured": true, 00:15:25.630 "data_offset": 2048, 00:15:25.630 "data_size": 63488 00:15:25.630 } 00:15:25.630 ] 00:15:25.630 } 00:15:25.630 } 00:15:25.630 }' 00:15:25.630 18:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.630 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:25.630 BaseBdev2 00:15:25.630 BaseBdev3 00:15:25.630 BaseBdev4' 00:15:25.630 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:25.630 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:25.630 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:25.630 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:25.630 "name": "BaseBdev1", 00:15:25.630 "aliases": [ 00:15:25.630 "51fda6b0-0a06-4e4e-9990-4871985a21bd" 00:15:25.630 ], 00:15:25.630 "product_name": "Malloc disk", 00:15:25.630 
"block_size": 512, 00:15:25.630 "num_blocks": 65536, 00:15:25.630 "uuid": "51fda6b0-0a06-4e4e-9990-4871985a21bd", 00:15:25.630 "assigned_rate_limits": { 00:15:25.630 "rw_ios_per_sec": 0, 00:15:25.630 "rw_mbytes_per_sec": 0, 00:15:25.630 "r_mbytes_per_sec": 0, 00:15:25.630 "w_mbytes_per_sec": 0 00:15:25.630 }, 00:15:25.630 "claimed": true, 00:15:25.630 "claim_type": "exclusive_write", 00:15:25.630 "zoned": false, 00:15:25.630 "supported_io_types": { 00:15:25.630 "read": true, 00:15:25.630 "write": true, 00:15:25.630 "unmap": true, 00:15:25.630 "flush": true, 00:15:25.630 "reset": true, 00:15:25.630 "nvme_admin": false, 00:15:25.630 "nvme_io": false, 00:15:25.630 "nvme_io_md": false, 00:15:25.630 "write_zeroes": true, 00:15:25.630 "zcopy": true, 00:15:25.630 "get_zone_info": false, 00:15:25.630 "zone_management": false, 00:15:25.630 "zone_append": false, 00:15:25.630 "compare": false, 00:15:25.630 "compare_and_write": false, 00:15:25.630 "abort": true, 00:15:25.630 "seek_hole": false, 00:15:25.630 "seek_data": false, 00:15:25.630 "copy": true, 00:15:25.630 "nvme_iov_md": false 00:15:25.630 }, 00:15:25.630 "memory_domains": [ 00:15:25.630 { 00:15:25.630 "dma_device_id": "system", 00:15:25.630 "dma_device_type": 1 00:15:25.630 }, 00:15:25.630 { 00:15:25.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.630 "dma_device_type": 2 00:15:25.630 } 00:15:25.630 ], 00:15:25.630 "driver_specific": {} 00:15:25.630 }' 00:15:25.630 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:25.889 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:25.889 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:25.889 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:25.889 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:25.889 18:18:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:25.889 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:25.889 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:25.889 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:25.889 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:25.889 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.148 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.148 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.148 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:26.148 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.148 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.148 "name": "BaseBdev2", 00:15:26.148 "aliases": [ 00:15:26.148 "cc7b39e8-94ab-45a6-adb8-6b3a47d628cb" 00:15:26.148 ], 00:15:26.148 "product_name": "Malloc disk", 00:15:26.148 "block_size": 512, 00:15:26.148 "num_blocks": 65536, 00:15:26.148 "uuid": "cc7b39e8-94ab-45a6-adb8-6b3a47d628cb", 00:15:26.148 "assigned_rate_limits": { 00:15:26.148 "rw_ios_per_sec": 0, 00:15:26.148 "rw_mbytes_per_sec": 0, 00:15:26.148 "r_mbytes_per_sec": 0, 00:15:26.148 "w_mbytes_per_sec": 0 00:15:26.148 }, 00:15:26.148 "claimed": true, 00:15:26.148 "claim_type": "exclusive_write", 00:15:26.148 "zoned": false, 00:15:26.148 "supported_io_types": { 00:15:26.148 "read": true, 00:15:26.148 "write": true, 00:15:26.148 "unmap": true, 00:15:26.148 
"flush": true, 00:15:26.148 "reset": true, 00:15:26.148 "nvme_admin": false, 00:15:26.148 "nvme_io": false, 00:15:26.148 "nvme_io_md": false, 00:15:26.148 "write_zeroes": true, 00:15:26.148 "zcopy": true, 00:15:26.148 "get_zone_info": false, 00:15:26.148 "zone_management": false, 00:15:26.148 "zone_append": false, 00:15:26.148 "compare": false, 00:15:26.148 "compare_and_write": false, 00:15:26.148 "abort": true, 00:15:26.148 "seek_hole": false, 00:15:26.148 "seek_data": false, 00:15:26.148 "copy": true, 00:15:26.148 "nvme_iov_md": false 00:15:26.148 }, 00:15:26.148 "memory_domains": [ 00:15:26.148 { 00:15:26.148 "dma_device_id": "system", 00:15:26.148 "dma_device_type": 1 00:15:26.148 }, 00:15:26.148 { 00:15:26.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.148 "dma_device_type": 2 00:15:26.148 } 00:15:26.148 ], 00:15:26.148 "driver_specific": {} 00:15:26.148 }' 00:15:26.148 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.148 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.148 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.148 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.407 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.407 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:26.407 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.407 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.407 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.407 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.407 18:18:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.407 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.407 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.407 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.407 18:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:26.700 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.700 "name": "BaseBdev3", 00:15:26.700 "aliases": [ 00:15:26.700 "148335ed-c7bf-45f0-8d6a-081a88d8d31f" 00:15:26.700 ], 00:15:26.700 "product_name": "Malloc disk", 00:15:26.700 "block_size": 512, 00:15:26.700 "num_blocks": 65536, 00:15:26.700 "uuid": "148335ed-c7bf-45f0-8d6a-081a88d8d31f", 00:15:26.700 "assigned_rate_limits": { 00:15:26.700 "rw_ios_per_sec": 0, 00:15:26.700 "rw_mbytes_per_sec": 0, 00:15:26.700 "r_mbytes_per_sec": 0, 00:15:26.700 "w_mbytes_per_sec": 0 00:15:26.700 }, 00:15:26.700 "claimed": true, 00:15:26.700 "claim_type": "exclusive_write", 00:15:26.700 "zoned": false, 00:15:26.700 "supported_io_types": { 00:15:26.700 "read": true, 00:15:26.700 "write": true, 00:15:26.700 "unmap": true, 00:15:26.700 "flush": true, 00:15:26.700 "reset": true, 00:15:26.700 "nvme_admin": false, 00:15:26.700 "nvme_io": false, 00:15:26.700 "nvme_io_md": false, 00:15:26.700 "write_zeroes": true, 00:15:26.700 "zcopy": true, 00:15:26.700 "get_zone_info": false, 00:15:26.700 "zone_management": false, 00:15:26.700 "zone_append": false, 00:15:26.700 "compare": false, 00:15:26.700 "compare_and_write": false, 00:15:26.700 "abort": true, 00:15:26.700 "seek_hole": false, 00:15:26.701 "seek_data": false, 00:15:26.701 "copy": true, 00:15:26.701 "nvme_iov_md": 
false 00:15:26.701 }, 00:15:26.701 "memory_domains": [ 00:15:26.701 { 00:15:26.701 "dma_device_id": "system", 00:15:26.701 "dma_device_type": 1 00:15:26.701 }, 00:15:26.701 { 00:15:26.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.701 "dma_device_type": 2 00:15:26.701 } 00:15:26.701 ], 00:15:26.701 "driver_specific": {} 00:15:26.701 }' 00:15:26.701 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.701 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.701 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.701 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.701 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.701 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:26.701 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.960 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.960 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.960 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.960 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.960 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.960 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.960 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:26.960 18:18:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:27.218 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:27.218 "name": "BaseBdev4", 00:15:27.218 "aliases": [ 00:15:27.218 "cafb5faa-78cf-41a2-8a31-339a30065b41" 00:15:27.218 ], 00:15:27.218 "product_name": "Malloc disk", 00:15:27.218 "block_size": 512, 00:15:27.218 "num_blocks": 65536, 00:15:27.218 "uuid": "cafb5faa-78cf-41a2-8a31-339a30065b41", 00:15:27.218 "assigned_rate_limits": { 00:15:27.219 "rw_ios_per_sec": 0, 00:15:27.219 "rw_mbytes_per_sec": 0, 00:15:27.219 "r_mbytes_per_sec": 0, 00:15:27.219 "w_mbytes_per_sec": 0 00:15:27.219 }, 00:15:27.219 "claimed": true, 00:15:27.219 "claim_type": "exclusive_write", 00:15:27.219 "zoned": false, 00:15:27.219 "supported_io_types": { 00:15:27.219 "read": true, 00:15:27.219 "write": true, 00:15:27.219 "unmap": true, 00:15:27.219 "flush": true, 00:15:27.219 "reset": true, 00:15:27.219 "nvme_admin": false, 00:15:27.219 "nvme_io": false, 00:15:27.219 "nvme_io_md": false, 00:15:27.219 "write_zeroes": true, 00:15:27.219 "zcopy": true, 00:15:27.219 "get_zone_info": false, 00:15:27.219 "zone_management": false, 00:15:27.219 "zone_append": false, 00:15:27.219 "compare": false, 00:15:27.219 "compare_and_write": false, 00:15:27.219 "abort": true, 00:15:27.219 "seek_hole": false, 00:15:27.219 "seek_data": false, 00:15:27.219 "copy": true, 00:15:27.219 "nvme_iov_md": false 00:15:27.219 }, 00:15:27.219 "memory_domains": [ 00:15:27.219 { 00:15:27.219 "dma_device_id": "system", 00:15:27.219 "dma_device_type": 1 00:15:27.219 }, 00:15:27.219 { 00:15:27.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.219 "dma_device_type": 2 00:15:27.219 } 00:15:27.219 ], 00:15:27.219 "driver_specific": {} 00:15:27.219 }' 00:15:27.219 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.219 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:15:27.219 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:27.219 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.219 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.219 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.219 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.219 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.478 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.478 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.478 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.478 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:27.478 18:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:27.478 [2024-07-24 18:18:36.065374] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.478 [2024-07-24 18:18:36.065393] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.478 [2024-07-24 18:18:36.065427] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:27.736 18:18:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.736 "name": "Existed_Raid", 00:15:27.736 "uuid": "d63abf09-b121-4cce-9d84-0a214b9122a4", 00:15:27.736 "strip_size_kb": 64, 00:15:27.736 "state": "offline", 00:15:27.736 
"raid_level": "raid0", 00:15:27.736 "superblock": true, 00:15:27.736 "num_base_bdevs": 4, 00:15:27.736 "num_base_bdevs_discovered": 3, 00:15:27.736 "num_base_bdevs_operational": 3, 00:15:27.736 "base_bdevs_list": [ 00:15:27.736 { 00:15:27.736 "name": null, 00:15:27.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.736 "is_configured": false, 00:15:27.736 "data_offset": 2048, 00:15:27.736 "data_size": 63488 00:15:27.736 }, 00:15:27.736 { 00:15:27.736 "name": "BaseBdev2", 00:15:27.736 "uuid": "cc7b39e8-94ab-45a6-adb8-6b3a47d628cb", 00:15:27.736 "is_configured": true, 00:15:27.736 "data_offset": 2048, 00:15:27.736 "data_size": 63488 00:15:27.736 }, 00:15:27.736 { 00:15:27.736 "name": "BaseBdev3", 00:15:27.736 "uuid": "148335ed-c7bf-45f0-8d6a-081a88d8d31f", 00:15:27.736 "is_configured": true, 00:15:27.736 "data_offset": 2048, 00:15:27.736 "data_size": 63488 00:15:27.736 }, 00:15:27.736 { 00:15:27.736 "name": "BaseBdev4", 00:15:27.736 "uuid": "cafb5faa-78cf-41a2-8a31-339a30065b41", 00:15:27.736 "is_configured": true, 00:15:27.736 "data_offset": 2048, 00:15:27.736 "data_size": 63488 00:15:27.736 } 00:15:27.736 ] 00:15:27.736 }' 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.736 18:18:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.303 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:28.303 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:28.303 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.303 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:28.562 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:15:28.562 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.562 18:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:28.562 [2024-07-24 18:18:37.064768] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:28.562 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:28.562 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:28.562 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.562 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:28.821 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:28.821 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.821 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:28.821 [2024-07-24 18:18:37.415393] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:29.079 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:29.079 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:29.079 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.079 18:18:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:29.079 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:29.079 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.079 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:29.338 [2024-07-24 18:18:37.769852] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:29.338 [2024-07-24 18:18:37.769887] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2024ab0 name Existed_Raid, state offline 00:15:29.338 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:29.338 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:29.338 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.338 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:29.596 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:29.596 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:29.596 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:29.596 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:29.596 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:29.596 18:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:29.596 BaseBdev2 00:15:29.596 18:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:29.596 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:29.596 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:29.596 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:29.596 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:29.596 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:29.596 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:29.856 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:30.115 [ 00:15:30.115 { 00:15:30.115 "name": "BaseBdev2", 00:15:30.115 "aliases": [ 00:15:30.115 "af2f24e7-04b7-41a3-b7fe-52e9d007fe67" 00:15:30.115 ], 00:15:30.115 "product_name": "Malloc disk", 00:15:30.115 "block_size": 512, 00:15:30.115 "num_blocks": 65536, 00:15:30.115 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:30.115 "assigned_rate_limits": { 00:15:30.115 "rw_ios_per_sec": 0, 00:15:30.115 "rw_mbytes_per_sec": 0, 00:15:30.115 "r_mbytes_per_sec": 0, 00:15:30.115 "w_mbytes_per_sec": 0 00:15:30.115 }, 00:15:30.115 "claimed": false, 00:15:30.115 "zoned": false, 00:15:30.115 "supported_io_types": { 00:15:30.115 "read": true, 00:15:30.115 "write": true, 00:15:30.115 "unmap": true, 00:15:30.115 "flush": 
true, 00:15:30.115 "reset": true, 00:15:30.115 "nvme_admin": false, 00:15:30.115 "nvme_io": false, 00:15:30.115 "nvme_io_md": false, 00:15:30.115 "write_zeroes": true, 00:15:30.115 "zcopy": true, 00:15:30.115 "get_zone_info": false, 00:15:30.115 "zone_management": false, 00:15:30.115 "zone_append": false, 00:15:30.115 "compare": false, 00:15:30.115 "compare_and_write": false, 00:15:30.115 "abort": true, 00:15:30.115 "seek_hole": false, 00:15:30.115 "seek_data": false, 00:15:30.115 "copy": true, 00:15:30.115 "nvme_iov_md": false 00:15:30.115 }, 00:15:30.115 "memory_domains": [ 00:15:30.115 { 00:15:30.115 "dma_device_id": "system", 00:15:30.115 "dma_device_type": 1 00:15:30.115 }, 00:15:30.115 { 00:15:30.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.115 "dma_device_type": 2 00:15:30.115 } 00:15:30.115 ], 00:15:30.115 "driver_specific": {} 00:15:30.115 } 00:15:30.115 ] 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:30.115 BaseBdev3 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:30.115 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.375 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:30.375 [ 00:15:30.375 { 00:15:30.375 "name": "BaseBdev3", 00:15:30.375 "aliases": [ 00:15:30.375 "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e" 00:15:30.375 ], 00:15:30.375 "product_name": "Malloc disk", 00:15:30.375 "block_size": 512, 00:15:30.375 "num_blocks": 65536, 00:15:30.375 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:30.375 "assigned_rate_limits": { 00:15:30.375 "rw_ios_per_sec": 0, 00:15:30.375 "rw_mbytes_per_sec": 0, 00:15:30.375 "r_mbytes_per_sec": 0, 00:15:30.375 "w_mbytes_per_sec": 0 00:15:30.375 }, 00:15:30.375 "claimed": false, 00:15:30.375 "zoned": false, 00:15:30.375 "supported_io_types": { 00:15:30.375 "read": true, 00:15:30.375 "write": true, 00:15:30.375 "unmap": true, 00:15:30.375 "flush": true, 00:15:30.375 "reset": true, 00:15:30.375 "nvme_admin": false, 00:15:30.375 "nvme_io": false, 00:15:30.375 "nvme_io_md": false, 00:15:30.375 "write_zeroes": true, 00:15:30.375 "zcopy": true, 00:15:30.375 "get_zone_info": false, 00:15:30.375 "zone_management": false, 00:15:30.375 "zone_append": false, 00:15:30.375 "compare": false, 00:15:30.375 "compare_and_write": false, 00:15:30.375 "abort": true, 00:15:30.375 "seek_hole": false, 00:15:30.375 "seek_data": false, 00:15:30.375 "copy": true, 00:15:30.375 "nvme_iov_md": false 00:15:30.375 }, 00:15:30.375 "memory_domains": [ 00:15:30.375 { 00:15:30.375 "dma_device_id": "system", 00:15:30.375 "dma_device_type": 1 
00:15:30.375 }, 00:15:30.375 { 00:15:30.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.375 "dma_device_type": 2 00:15:30.375 } 00:15:30.375 ], 00:15:30.375 "driver_specific": {} 00:15:30.375 } 00:15:30.375 ] 00:15:30.375 18:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:30.375 18:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:30.375 18:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:30.375 18:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:30.633 BaseBdev4 00:15:30.633 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:30.633 18:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:30.633 18:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:30.633 18:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:30.633 18:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:30.633 18:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:30.633 18:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.891 18:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:30.891 [ 00:15:30.891 { 00:15:30.891 "name": "BaseBdev4", 00:15:30.891 "aliases": [ 
00:15:30.891 "5e66fba7-2b44-4849-bbba-5b80d4f071cb" 00:15:30.891 ], 00:15:30.891 "product_name": "Malloc disk", 00:15:30.891 "block_size": 512, 00:15:30.891 "num_blocks": 65536, 00:15:30.891 "uuid": "5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:30.891 "assigned_rate_limits": { 00:15:30.891 "rw_ios_per_sec": 0, 00:15:30.891 "rw_mbytes_per_sec": 0, 00:15:30.891 "r_mbytes_per_sec": 0, 00:15:30.891 "w_mbytes_per_sec": 0 00:15:30.891 }, 00:15:30.891 "claimed": false, 00:15:30.891 "zoned": false, 00:15:30.891 "supported_io_types": { 00:15:30.891 "read": true, 00:15:30.891 "write": true, 00:15:30.891 "unmap": true, 00:15:30.891 "flush": true, 00:15:30.891 "reset": true, 00:15:30.891 "nvme_admin": false, 00:15:30.891 "nvme_io": false, 00:15:30.891 "nvme_io_md": false, 00:15:30.891 "write_zeroes": true, 00:15:30.891 "zcopy": true, 00:15:30.891 "get_zone_info": false, 00:15:30.891 "zone_management": false, 00:15:30.891 "zone_append": false, 00:15:30.891 "compare": false, 00:15:30.891 "compare_and_write": false, 00:15:30.891 "abort": true, 00:15:30.891 "seek_hole": false, 00:15:30.891 "seek_data": false, 00:15:30.891 "copy": true, 00:15:30.891 "nvme_iov_md": false 00:15:30.891 }, 00:15:30.891 "memory_domains": [ 00:15:30.891 { 00:15:30.891 "dma_device_id": "system", 00:15:30.891 "dma_device_type": 1 00:15:30.891 }, 00:15:30.891 { 00:15:30.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.891 "dma_device_type": 2 00:15:30.891 } 00:15:30.891 ], 00:15:30.891 "driver_specific": {} 00:15:30.891 } 00:15:30.891 ] 00:15:30.891 18:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:30.891 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:30.891 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:30.891 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:31.149 [2024-07-24 18:18:39.615557] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.149 [2024-07-24 18:18:39.615588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.149 [2024-07-24 18:18:39.615601] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.149 [2024-07-24 18:18:39.616516] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.149 [2024-07-24 18:18:39.616546] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- 
# local tmp 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.149 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.408 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.408 "name": "Existed_Raid", 00:15:31.408 "uuid": "0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2", 00:15:31.408 "strip_size_kb": 64, 00:15:31.408 "state": "configuring", 00:15:31.408 "raid_level": "raid0", 00:15:31.408 "superblock": true, 00:15:31.408 "num_base_bdevs": 4, 00:15:31.408 "num_base_bdevs_discovered": 3, 00:15:31.408 "num_base_bdevs_operational": 4, 00:15:31.408 "base_bdevs_list": [ 00:15:31.408 { 00:15:31.408 "name": "BaseBdev1", 00:15:31.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.408 "is_configured": false, 00:15:31.408 "data_offset": 0, 00:15:31.408 "data_size": 0 00:15:31.408 }, 00:15:31.408 { 00:15:31.408 "name": "BaseBdev2", 00:15:31.408 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:31.408 "is_configured": true, 00:15:31.408 "data_offset": 2048, 00:15:31.408 "data_size": 63488 00:15:31.408 }, 00:15:31.408 { 00:15:31.408 "name": "BaseBdev3", 00:15:31.408 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:31.408 "is_configured": true, 00:15:31.408 "data_offset": 2048, 00:15:31.408 "data_size": 63488 00:15:31.408 }, 00:15:31.408 { 00:15:31.408 "name": "BaseBdev4", 00:15:31.408 "uuid": "5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:31.408 "is_configured": true, 00:15:31.408 "data_offset": 2048, 00:15:31.408 "data_size": 63488 00:15:31.408 } 00:15:31.408 ] 00:15:31.408 }' 00:15:31.408 18:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.409 18:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:31.977 [2024-07-24 18:18:40.433645] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.977 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.236 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.236 "name": "Existed_Raid", 
00:15:32.236 "uuid": "0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2", 00:15:32.236 "strip_size_kb": 64, 00:15:32.236 "state": "configuring", 00:15:32.236 "raid_level": "raid0", 00:15:32.236 "superblock": true, 00:15:32.236 "num_base_bdevs": 4, 00:15:32.236 "num_base_bdevs_discovered": 2, 00:15:32.236 "num_base_bdevs_operational": 4, 00:15:32.236 "base_bdevs_list": [ 00:15:32.236 { 00:15:32.236 "name": "BaseBdev1", 00:15:32.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.236 "is_configured": false, 00:15:32.236 "data_offset": 0, 00:15:32.236 "data_size": 0 00:15:32.236 }, 00:15:32.236 { 00:15:32.236 "name": null, 00:15:32.236 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:32.236 "is_configured": false, 00:15:32.236 "data_offset": 2048, 00:15:32.236 "data_size": 63488 00:15:32.236 }, 00:15:32.236 { 00:15:32.236 "name": "BaseBdev3", 00:15:32.236 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:32.236 "is_configured": true, 00:15:32.236 "data_offset": 2048, 00:15:32.236 "data_size": 63488 00:15:32.236 }, 00:15:32.236 { 00:15:32.236 "name": "BaseBdev4", 00:15:32.236 "uuid": "5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:32.236 "is_configured": true, 00:15:32.236 "data_offset": 2048, 00:15:32.236 "data_size": 63488 00:15:32.236 } 00:15:32.236 ] 00:15:32.236 }' 00:15:32.236 18:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.236 18:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.804 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.804 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:32.804 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:32.804 18:18:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.062 [2024-07-24 18:18:41.426848] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.062 BaseBdev1 00:15:33.062 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:33.062 18:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:33.062 18:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:33.062 18:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:33.062 18:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:33.062 18:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:33.062 18:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.062 18:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.321 [ 00:15:33.321 { 00:15:33.321 "name": "BaseBdev1", 00:15:33.321 "aliases": [ 00:15:33.321 "5f6b5c1d-dd73-43a1-999d-c4cc6995053d" 00:15:33.321 ], 00:15:33.321 "product_name": "Malloc disk", 00:15:33.321 "block_size": 512, 00:15:33.321 "num_blocks": 65536, 00:15:33.321 "uuid": "5f6b5c1d-dd73-43a1-999d-c4cc6995053d", 00:15:33.321 "assigned_rate_limits": { 00:15:33.321 "rw_ios_per_sec": 0, 00:15:33.321 "rw_mbytes_per_sec": 0, 00:15:33.321 "r_mbytes_per_sec": 0, 00:15:33.321 "w_mbytes_per_sec": 0 00:15:33.321 }, 
00:15:33.321 "claimed": true, 00:15:33.321 "claim_type": "exclusive_write", 00:15:33.321 "zoned": false, 00:15:33.321 "supported_io_types": { 00:15:33.321 "read": true, 00:15:33.321 "write": true, 00:15:33.321 "unmap": true, 00:15:33.321 "flush": true, 00:15:33.321 "reset": true, 00:15:33.321 "nvme_admin": false, 00:15:33.321 "nvme_io": false, 00:15:33.321 "nvme_io_md": false, 00:15:33.321 "write_zeroes": true, 00:15:33.321 "zcopy": true, 00:15:33.321 "get_zone_info": false, 00:15:33.321 "zone_management": false, 00:15:33.321 "zone_append": false, 00:15:33.321 "compare": false, 00:15:33.321 "compare_and_write": false, 00:15:33.321 "abort": true, 00:15:33.321 "seek_hole": false, 00:15:33.321 "seek_data": false, 00:15:33.321 "copy": true, 00:15:33.321 "nvme_iov_md": false 00:15:33.321 }, 00:15:33.321 "memory_domains": [ 00:15:33.321 { 00:15:33.321 "dma_device_id": "system", 00:15:33.321 "dma_device_type": 1 00:15:33.321 }, 00:15:33.321 { 00:15:33.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.321 "dma_device_type": 2 00:15:33.321 } 00:15:33.321 ], 00:15:33.321 "driver_specific": {} 00:15:33.321 } 00:15:33.321 ] 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:33.321 
18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.321 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.580 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.580 "name": "Existed_Raid", 00:15:33.580 "uuid": "0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2", 00:15:33.580 "strip_size_kb": 64, 00:15:33.580 "state": "configuring", 00:15:33.580 "raid_level": "raid0", 00:15:33.580 "superblock": true, 00:15:33.580 "num_base_bdevs": 4, 00:15:33.580 "num_base_bdevs_discovered": 3, 00:15:33.580 "num_base_bdevs_operational": 4, 00:15:33.580 "base_bdevs_list": [ 00:15:33.580 { 00:15:33.580 "name": "BaseBdev1", 00:15:33.580 "uuid": "5f6b5c1d-dd73-43a1-999d-c4cc6995053d", 00:15:33.580 "is_configured": true, 00:15:33.580 "data_offset": 2048, 00:15:33.580 "data_size": 63488 00:15:33.580 }, 00:15:33.580 { 00:15:33.580 "name": null, 00:15:33.580 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:33.580 "is_configured": false, 00:15:33.580 "data_offset": 2048, 00:15:33.580 "data_size": 63488 00:15:33.580 }, 00:15:33.580 { 00:15:33.580 "name": "BaseBdev3", 00:15:33.580 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:33.580 "is_configured": true, 00:15:33.580 "data_offset": 2048, 00:15:33.580 "data_size": 63488 00:15:33.580 }, 00:15:33.580 { 00:15:33.580 "name": 
"BaseBdev4", 00:15:33.580 "uuid": "5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:33.580 "is_configured": true, 00:15:33.580 "data_offset": 2048, 00:15:33.580 "data_size": 63488 00:15:33.580 } 00:15:33.580 ] 00:15:33.580 }' 00:15:33.580 18:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.580 18:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.839 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:33.839 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.099 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:34.099 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:34.358 [2024-07-24 18:18:42.750278] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:34.358 18:18:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.358 "name": "Existed_Raid", 00:15:34.358 "uuid": "0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2", 00:15:34.358 "strip_size_kb": 64, 00:15:34.358 "state": "configuring", 00:15:34.358 "raid_level": "raid0", 00:15:34.358 "superblock": true, 00:15:34.358 "num_base_bdevs": 4, 00:15:34.358 "num_base_bdevs_discovered": 2, 00:15:34.358 "num_base_bdevs_operational": 4, 00:15:34.358 "base_bdevs_list": [ 00:15:34.358 { 00:15:34.358 "name": "BaseBdev1", 00:15:34.358 "uuid": "5f6b5c1d-dd73-43a1-999d-c4cc6995053d", 00:15:34.358 "is_configured": true, 00:15:34.358 "data_offset": 2048, 00:15:34.358 "data_size": 63488 00:15:34.358 }, 00:15:34.358 { 00:15:34.358 "name": null, 00:15:34.358 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:34.358 "is_configured": false, 00:15:34.358 "data_offset": 2048, 00:15:34.358 "data_size": 63488 00:15:34.358 }, 00:15:34.358 { 00:15:34.358 "name": null, 00:15:34.358 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:34.358 "is_configured": false, 00:15:34.358 "data_offset": 2048, 00:15:34.358 "data_size": 63488 00:15:34.358 }, 00:15:34.358 { 00:15:34.358 "name": "BaseBdev4", 
00:15:34.358 "uuid": "5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:34.358 "is_configured": true, 00:15:34.358 "data_offset": 2048, 00:15:34.358 "data_size": 63488 00:15:34.358 } 00:15:34.358 ] 00:15:34.358 }' 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.358 18:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.934 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:34.934 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:35.194 [2024-07-24 18:18:43.728811] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:35.194 18:18:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.194 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.454 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.454 "name": "Existed_Raid", 00:15:35.454 "uuid": "0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2", 00:15:35.454 "strip_size_kb": 64, 00:15:35.454 "state": "configuring", 00:15:35.454 "raid_level": "raid0", 00:15:35.454 "superblock": true, 00:15:35.454 "num_base_bdevs": 4, 00:15:35.454 "num_base_bdevs_discovered": 3, 00:15:35.454 "num_base_bdevs_operational": 4, 00:15:35.454 "base_bdevs_list": [ 00:15:35.454 { 00:15:35.454 "name": "BaseBdev1", 00:15:35.454 "uuid": "5f6b5c1d-dd73-43a1-999d-c4cc6995053d", 00:15:35.454 "is_configured": true, 00:15:35.454 "data_offset": 2048, 00:15:35.454 "data_size": 63488 00:15:35.454 }, 00:15:35.454 { 00:15:35.454 "name": null, 00:15:35.454 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:35.454 "is_configured": false, 00:15:35.454 "data_offset": 2048, 00:15:35.454 "data_size": 63488 00:15:35.454 }, 00:15:35.454 { 00:15:35.454 "name": "BaseBdev3", 00:15:35.454 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:35.454 "is_configured": true, 00:15:35.454 "data_offset": 2048, 00:15:35.454 "data_size": 63488 00:15:35.454 }, 00:15:35.454 { 00:15:35.454 "name": "BaseBdev4", 
00:15:35.454 "uuid": "5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:35.454 "is_configured": true, 00:15:35.454 "data_offset": 2048, 00:15:35.454 "data_size": 63488 00:15:35.454 } 00:15:35.454 ] 00:15:35.454 }' 00:15:35.454 18:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.454 18:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.020 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.020 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:36.020 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:36.021 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:36.279 [2024-07-24 18:18:44.699331] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.279 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.538 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.538 "name": "Existed_Raid", 00:15:36.538 "uuid": "0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2", 00:15:36.538 "strip_size_kb": 64, 00:15:36.538 "state": "configuring", 00:15:36.538 "raid_level": "raid0", 00:15:36.538 "superblock": true, 00:15:36.538 "num_base_bdevs": 4, 00:15:36.538 "num_base_bdevs_discovered": 2, 00:15:36.538 "num_base_bdevs_operational": 4, 00:15:36.538 "base_bdevs_list": [ 00:15:36.538 { 00:15:36.538 "name": null, 00:15:36.538 "uuid": "5f6b5c1d-dd73-43a1-999d-c4cc6995053d", 00:15:36.538 "is_configured": false, 00:15:36.538 "data_offset": 2048, 00:15:36.538 "data_size": 63488 00:15:36.538 }, 00:15:36.538 { 00:15:36.538 "name": null, 00:15:36.538 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:36.538 "is_configured": false, 00:15:36.538 "data_offset": 2048, 00:15:36.538 "data_size": 63488 00:15:36.538 }, 00:15:36.538 { 00:15:36.538 "name": "BaseBdev3", 00:15:36.538 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:36.538 "is_configured": true, 00:15:36.538 "data_offset": 2048, 00:15:36.538 "data_size": 63488 00:15:36.538 }, 00:15:36.538 { 00:15:36.538 "name": "BaseBdev4", 00:15:36.538 "uuid": 
"5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:36.538 "is_configured": true, 00:15:36.538 "data_offset": 2048, 00:15:36.538 "data_size": 63488 00:15:36.538 } 00:15:36.538 ] 00:15:36.538 }' 00:15:36.538 18:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.538 18:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.797 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.797 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:37.055 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:37.055 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:37.315 [2024-07-24 18:18:45.679349] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:37.315 18:18:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.315 "name": "Existed_Raid", 00:15:37.315 "uuid": "0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2", 00:15:37.315 "strip_size_kb": 64, 00:15:37.315 "state": "configuring", 00:15:37.315 "raid_level": "raid0", 00:15:37.315 "superblock": true, 00:15:37.315 "num_base_bdevs": 4, 00:15:37.315 "num_base_bdevs_discovered": 3, 00:15:37.315 "num_base_bdevs_operational": 4, 00:15:37.315 "base_bdevs_list": [ 00:15:37.315 { 00:15:37.315 "name": null, 00:15:37.315 "uuid": "5f6b5c1d-dd73-43a1-999d-c4cc6995053d", 00:15:37.315 "is_configured": false, 00:15:37.315 "data_offset": 2048, 00:15:37.315 "data_size": 63488 00:15:37.315 }, 00:15:37.315 { 00:15:37.315 "name": "BaseBdev2", 00:15:37.315 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:37.315 "is_configured": true, 00:15:37.315 "data_offset": 2048, 00:15:37.315 "data_size": 63488 00:15:37.315 }, 00:15:37.315 { 00:15:37.315 "name": "BaseBdev3", 00:15:37.315 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:37.315 "is_configured": true, 00:15:37.315 "data_offset": 2048, 00:15:37.315 "data_size": 63488 00:15:37.315 }, 00:15:37.315 { 00:15:37.315 "name": "BaseBdev4", 
00:15:37.315 "uuid": "5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:37.315 "is_configured": true, 00:15:37.315 "data_offset": 2048, 00:15:37.315 "data_size": 63488 00:15:37.315 } 00:15:37.315 ] 00:15:37.315 }' 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.315 18:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.882 18:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.882 18:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:38.140 18:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:38.140 18:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.140 18:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:38.140 18:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5f6b5c1d-dd73-43a1-999d-c4cc6995053d 00:15:38.398 [2024-07-24 18:18:46.844995] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:38.398 [2024-07-24 18:18:46.845116] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x201b490 00:15:38.398 [2024-07-24 18:18:46.845125] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:38.398 [2024-07-24 18:18:46.845240] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f390d0 00:15:38.398 [2024-07-24 18:18:46.845317] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x201b490 00:15:38.398 [2024-07-24 18:18:46.845323] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x201b490 00:15:38.398 [2024-07-24 18:18:46.845383] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.398 NewBaseBdev 00:15:38.398 18:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:38.398 18:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:38.398 18:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:38.398 18:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:38.398 18:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:38.398 18:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:38.398 18:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:38.657 [ 00:15:38.657 { 00:15:38.657 "name": "NewBaseBdev", 00:15:38.657 "aliases": [ 00:15:38.657 "5f6b5c1d-dd73-43a1-999d-c4cc6995053d" 00:15:38.657 ], 00:15:38.657 "product_name": "Malloc disk", 00:15:38.657 "block_size": 512, 00:15:38.657 "num_blocks": 65536, 00:15:38.657 "uuid": "5f6b5c1d-dd73-43a1-999d-c4cc6995053d", 00:15:38.657 "assigned_rate_limits": { 00:15:38.657 "rw_ios_per_sec": 0, 00:15:38.657 "rw_mbytes_per_sec": 0, 00:15:38.657 "r_mbytes_per_sec": 0, 00:15:38.657 
"w_mbytes_per_sec": 0 00:15:38.657 }, 00:15:38.657 "claimed": true, 00:15:38.657 "claim_type": "exclusive_write", 00:15:38.657 "zoned": false, 00:15:38.657 "supported_io_types": { 00:15:38.657 "read": true, 00:15:38.657 "write": true, 00:15:38.657 "unmap": true, 00:15:38.657 "flush": true, 00:15:38.657 "reset": true, 00:15:38.657 "nvme_admin": false, 00:15:38.657 "nvme_io": false, 00:15:38.657 "nvme_io_md": false, 00:15:38.657 "write_zeroes": true, 00:15:38.657 "zcopy": true, 00:15:38.657 "get_zone_info": false, 00:15:38.657 "zone_management": false, 00:15:38.657 "zone_append": false, 00:15:38.657 "compare": false, 00:15:38.657 "compare_and_write": false, 00:15:38.657 "abort": true, 00:15:38.657 "seek_hole": false, 00:15:38.657 "seek_data": false, 00:15:38.657 "copy": true, 00:15:38.657 "nvme_iov_md": false 00:15:38.657 }, 00:15:38.657 "memory_domains": [ 00:15:38.657 { 00:15:38.657 "dma_device_id": "system", 00:15:38.657 "dma_device_type": 1 00:15:38.657 }, 00:15:38.657 { 00:15:38.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.657 "dma_device_type": 2 00:15:38.657 } 00:15:38.657 ], 00:15:38.657 "driver_specific": {} 00:15:38.657 } 00:15:38.657 ] 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.657 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.916 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.916 "name": "Existed_Raid", 00:15:38.916 "uuid": "0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2", 00:15:38.916 "strip_size_kb": 64, 00:15:38.916 "state": "online", 00:15:38.916 "raid_level": "raid0", 00:15:38.916 "superblock": true, 00:15:38.916 "num_base_bdevs": 4, 00:15:38.916 "num_base_bdevs_discovered": 4, 00:15:38.916 "num_base_bdevs_operational": 4, 00:15:38.916 "base_bdevs_list": [ 00:15:38.916 { 00:15:38.916 "name": "NewBaseBdev", 00:15:38.916 "uuid": "5f6b5c1d-dd73-43a1-999d-c4cc6995053d", 00:15:38.916 "is_configured": true, 00:15:38.916 "data_offset": 2048, 00:15:38.916 "data_size": 63488 00:15:38.916 }, 00:15:38.916 { 00:15:38.916 "name": "BaseBdev2", 00:15:38.916 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:38.916 "is_configured": true, 00:15:38.916 "data_offset": 2048, 00:15:38.916 "data_size": 63488 00:15:38.916 }, 00:15:38.916 { 00:15:38.916 "name": "BaseBdev3", 00:15:38.916 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:38.916 "is_configured": true, 00:15:38.916 "data_offset": 2048, 00:15:38.916 "data_size": 63488 00:15:38.916 }, 
00:15:38.916 { 00:15:38.916 "name": "BaseBdev4", 00:15:38.916 "uuid": "5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:38.916 "is_configured": true, 00:15:38.916 "data_offset": 2048, 00:15:38.916 "data_size": 63488 00:15:38.916 } 00:15:38.916 ] 00:15:38.916 }' 00:15:38.916 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.916 18:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.483 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:39.483 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:39.483 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:39.483 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:39.483 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:39.483 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:39.483 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:39.483 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:39.483 [2024-07-24 18:18:47.948031] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.483 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:39.483 "name": "Existed_Raid", 00:15:39.483 "aliases": [ 00:15:39.483 "0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2" 00:15:39.483 ], 00:15:39.483 "product_name": "Raid Volume", 00:15:39.483 "block_size": 512, 00:15:39.483 "num_blocks": 253952, 00:15:39.483 "uuid": "0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2", 
00:15:39.483 "assigned_rate_limits": { 00:15:39.483 "rw_ios_per_sec": 0, 00:15:39.483 "rw_mbytes_per_sec": 0, 00:15:39.483 "r_mbytes_per_sec": 0, 00:15:39.483 "w_mbytes_per_sec": 0 00:15:39.483 }, 00:15:39.483 "claimed": false, 00:15:39.483 "zoned": false, 00:15:39.483 "supported_io_types": { 00:15:39.483 "read": true, 00:15:39.483 "write": true, 00:15:39.483 "unmap": true, 00:15:39.483 "flush": true, 00:15:39.483 "reset": true, 00:15:39.483 "nvme_admin": false, 00:15:39.483 "nvme_io": false, 00:15:39.483 "nvme_io_md": false, 00:15:39.483 "write_zeroes": true, 00:15:39.483 "zcopy": false, 00:15:39.483 "get_zone_info": false, 00:15:39.483 "zone_management": false, 00:15:39.483 "zone_append": false, 00:15:39.483 "compare": false, 00:15:39.483 "compare_and_write": false, 00:15:39.483 "abort": false, 00:15:39.483 "seek_hole": false, 00:15:39.483 "seek_data": false, 00:15:39.483 "copy": false, 00:15:39.483 "nvme_iov_md": false 00:15:39.483 }, 00:15:39.483 "memory_domains": [ 00:15:39.483 { 00:15:39.483 "dma_device_id": "system", 00:15:39.483 "dma_device_type": 1 00:15:39.483 }, 00:15:39.483 { 00:15:39.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.483 "dma_device_type": 2 00:15:39.483 }, 00:15:39.483 { 00:15:39.483 "dma_device_id": "system", 00:15:39.483 "dma_device_type": 1 00:15:39.483 }, 00:15:39.483 { 00:15:39.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.483 "dma_device_type": 2 00:15:39.483 }, 00:15:39.483 { 00:15:39.483 "dma_device_id": "system", 00:15:39.483 "dma_device_type": 1 00:15:39.483 }, 00:15:39.483 { 00:15:39.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.483 "dma_device_type": 2 00:15:39.483 }, 00:15:39.483 { 00:15:39.483 "dma_device_id": "system", 00:15:39.483 "dma_device_type": 1 00:15:39.483 }, 00:15:39.483 { 00:15:39.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.483 "dma_device_type": 2 00:15:39.483 } 00:15:39.483 ], 00:15:39.483 "driver_specific": { 00:15:39.483 "raid": { 00:15:39.483 "uuid": 
"0abc116c-78a6-4a31-ad2c-6b6e0e7d7eb2", 00:15:39.483 "strip_size_kb": 64, 00:15:39.483 "state": "online", 00:15:39.483 "raid_level": "raid0", 00:15:39.483 "superblock": true, 00:15:39.483 "num_base_bdevs": 4, 00:15:39.483 "num_base_bdevs_discovered": 4, 00:15:39.483 "num_base_bdevs_operational": 4, 00:15:39.483 "base_bdevs_list": [ 00:15:39.483 { 00:15:39.483 "name": "NewBaseBdev", 00:15:39.483 "uuid": "5f6b5c1d-dd73-43a1-999d-c4cc6995053d", 00:15:39.483 "is_configured": true, 00:15:39.483 "data_offset": 2048, 00:15:39.483 "data_size": 63488 00:15:39.483 }, 00:15:39.483 { 00:15:39.483 "name": "BaseBdev2", 00:15:39.483 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:39.483 "is_configured": true, 00:15:39.483 "data_offset": 2048, 00:15:39.483 "data_size": 63488 00:15:39.483 }, 00:15:39.483 { 00:15:39.483 "name": "BaseBdev3", 00:15:39.483 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:39.483 "is_configured": true, 00:15:39.483 "data_offset": 2048, 00:15:39.483 "data_size": 63488 00:15:39.483 }, 00:15:39.484 { 00:15:39.484 "name": "BaseBdev4", 00:15:39.484 "uuid": "5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:39.484 "is_configured": true, 00:15:39.484 "data_offset": 2048, 00:15:39.484 "data_size": 63488 00:15:39.484 } 00:15:39.484 ] 00:15:39.484 } 00:15:39.484 } 00:15:39.484 }' 00:15:39.484 18:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.484 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:39.484 BaseBdev2 00:15:39.484 BaseBdev3 00:15:39.484 BaseBdev4' 00:15:39.484 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:39.484 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:15:39.484 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:39.742 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:39.742 "name": "NewBaseBdev", 00:15:39.742 "aliases": [ 00:15:39.742 "5f6b5c1d-dd73-43a1-999d-c4cc6995053d" 00:15:39.742 ], 00:15:39.742 "product_name": "Malloc disk", 00:15:39.742 "block_size": 512, 00:15:39.742 "num_blocks": 65536, 00:15:39.742 "uuid": "5f6b5c1d-dd73-43a1-999d-c4cc6995053d", 00:15:39.742 "assigned_rate_limits": { 00:15:39.742 "rw_ios_per_sec": 0, 00:15:39.742 "rw_mbytes_per_sec": 0, 00:15:39.742 "r_mbytes_per_sec": 0, 00:15:39.742 "w_mbytes_per_sec": 0 00:15:39.742 }, 00:15:39.742 "claimed": true, 00:15:39.742 "claim_type": "exclusive_write", 00:15:39.742 "zoned": false, 00:15:39.742 "supported_io_types": { 00:15:39.742 "read": true, 00:15:39.742 "write": true, 00:15:39.742 "unmap": true, 00:15:39.742 "flush": true, 00:15:39.742 "reset": true, 00:15:39.742 "nvme_admin": false, 00:15:39.742 "nvme_io": false, 00:15:39.742 "nvme_io_md": false, 00:15:39.742 "write_zeroes": true, 00:15:39.742 "zcopy": true, 00:15:39.742 "get_zone_info": false, 00:15:39.742 "zone_management": false, 00:15:39.742 "zone_append": false, 00:15:39.742 "compare": false, 00:15:39.742 "compare_and_write": false, 00:15:39.742 "abort": true, 00:15:39.742 "seek_hole": false, 00:15:39.742 "seek_data": false, 00:15:39.742 "copy": true, 00:15:39.742 "nvme_iov_md": false 00:15:39.742 }, 00:15:39.742 "memory_domains": [ 00:15:39.742 { 00:15:39.742 "dma_device_id": "system", 00:15:39.742 "dma_device_type": 1 00:15:39.742 }, 00:15:39.742 { 00:15:39.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.742 "dma_device_type": 2 00:15:39.742 } 00:15:39.742 ], 00:15:39.742 "driver_specific": {} 00:15:39.742 }' 00:15:39.742 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.742 18:18:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.742 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:39.742 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.742 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.742 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:39.742 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.001 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.001 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.001 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.001 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.001 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.001 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.001 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:40.001 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.260 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.260 "name": "BaseBdev2", 00:15:40.260 "aliases": [ 00:15:40.260 "af2f24e7-04b7-41a3-b7fe-52e9d007fe67" 00:15:40.260 ], 00:15:40.260 "product_name": "Malloc disk", 00:15:40.260 "block_size": 512, 00:15:40.260 "num_blocks": 65536, 00:15:40.260 "uuid": "af2f24e7-04b7-41a3-b7fe-52e9d007fe67", 00:15:40.260 
"assigned_rate_limits": { 00:15:40.260 "rw_ios_per_sec": 0, 00:15:40.260 "rw_mbytes_per_sec": 0, 00:15:40.260 "r_mbytes_per_sec": 0, 00:15:40.260 "w_mbytes_per_sec": 0 00:15:40.260 }, 00:15:40.260 "claimed": true, 00:15:40.260 "claim_type": "exclusive_write", 00:15:40.260 "zoned": false, 00:15:40.260 "supported_io_types": { 00:15:40.260 "read": true, 00:15:40.260 "write": true, 00:15:40.260 "unmap": true, 00:15:40.260 "flush": true, 00:15:40.260 "reset": true, 00:15:40.260 "nvme_admin": false, 00:15:40.260 "nvme_io": false, 00:15:40.260 "nvme_io_md": false, 00:15:40.260 "write_zeroes": true, 00:15:40.260 "zcopy": true, 00:15:40.260 "get_zone_info": false, 00:15:40.260 "zone_management": false, 00:15:40.260 "zone_append": false, 00:15:40.260 "compare": false, 00:15:40.260 "compare_and_write": false, 00:15:40.260 "abort": true, 00:15:40.260 "seek_hole": false, 00:15:40.260 "seek_data": false, 00:15:40.260 "copy": true, 00:15:40.260 "nvme_iov_md": false 00:15:40.260 }, 00:15:40.260 "memory_domains": [ 00:15:40.260 { 00:15:40.260 "dma_device_id": "system", 00:15:40.260 "dma_device_type": 1 00:15:40.260 }, 00:15:40.260 { 00:15:40.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.260 "dma_device_type": 2 00:15:40.260 } 00:15:40.260 ], 00:15:40.260 "driver_specific": {} 00:15:40.260 }' 00:15:40.260 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.260 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.260 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.260 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.260 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.260 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.260 18:18:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.260 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.519 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.519 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.519 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.519 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.519 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.519 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:40.519 18:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.778 "name": "BaseBdev3", 00:15:40.778 "aliases": [ 00:15:40.778 "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e" 00:15:40.778 ], 00:15:40.778 "product_name": "Malloc disk", 00:15:40.778 "block_size": 512, 00:15:40.778 "num_blocks": 65536, 00:15:40.778 "uuid": "c04baeeb-a1b9-40b4-8e56-dfe8ffdc189e", 00:15:40.778 "assigned_rate_limits": { 00:15:40.778 "rw_ios_per_sec": 0, 00:15:40.778 "rw_mbytes_per_sec": 0, 00:15:40.778 "r_mbytes_per_sec": 0, 00:15:40.778 "w_mbytes_per_sec": 0 00:15:40.778 }, 00:15:40.778 "claimed": true, 00:15:40.778 "claim_type": "exclusive_write", 00:15:40.778 "zoned": false, 00:15:40.778 "supported_io_types": { 00:15:40.778 "read": true, 00:15:40.778 "write": true, 00:15:40.778 "unmap": true, 00:15:40.778 "flush": true, 00:15:40.778 "reset": true, 00:15:40.778 "nvme_admin": false, 00:15:40.778 "nvme_io": false, 00:15:40.778 "nvme_io_md": false, 00:15:40.778 
"write_zeroes": true, 00:15:40.778 "zcopy": true, 00:15:40.778 "get_zone_info": false, 00:15:40.778 "zone_management": false, 00:15:40.778 "zone_append": false, 00:15:40.778 "compare": false, 00:15:40.778 "compare_and_write": false, 00:15:40.778 "abort": true, 00:15:40.778 "seek_hole": false, 00:15:40.778 "seek_data": false, 00:15:40.778 "copy": true, 00:15:40.778 "nvme_iov_md": false 00:15:40.778 }, 00:15:40.778 "memory_domains": [ 00:15:40.778 { 00:15:40.778 "dma_device_id": "system", 00:15:40.778 "dma_device_type": 1 00:15:40.778 }, 00:15:40.778 { 00:15:40.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.778 "dma_device_type": 2 00:15:40.778 } 00:15:40.778 ], 00:15:40.778 "driver_specific": {} 00:15:40.778 }' 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.778 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.037 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.038 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:15:41.038 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:41.038 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:41.038 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.038 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.038 "name": "BaseBdev4", 00:15:41.038 "aliases": [ 00:15:41.038 "5e66fba7-2b44-4849-bbba-5b80d4f071cb" 00:15:41.038 ], 00:15:41.038 "product_name": "Malloc disk", 00:15:41.038 "block_size": 512, 00:15:41.038 "num_blocks": 65536, 00:15:41.038 "uuid": "5e66fba7-2b44-4849-bbba-5b80d4f071cb", 00:15:41.038 "assigned_rate_limits": { 00:15:41.038 "rw_ios_per_sec": 0, 00:15:41.038 "rw_mbytes_per_sec": 0, 00:15:41.038 "r_mbytes_per_sec": 0, 00:15:41.038 "w_mbytes_per_sec": 0 00:15:41.038 }, 00:15:41.038 "claimed": true, 00:15:41.038 "claim_type": "exclusive_write", 00:15:41.038 "zoned": false, 00:15:41.038 "supported_io_types": { 00:15:41.038 "read": true, 00:15:41.038 "write": true, 00:15:41.038 "unmap": true, 00:15:41.038 "flush": true, 00:15:41.038 "reset": true, 00:15:41.038 "nvme_admin": false, 00:15:41.038 "nvme_io": false, 00:15:41.038 "nvme_io_md": false, 00:15:41.038 "write_zeroes": true, 00:15:41.038 "zcopy": true, 00:15:41.038 "get_zone_info": false, 00:15:41.038 "zone_management": false, 00:15:41.038 "zone_append": false, 00:15:41.038 "compare": false, 00:15:41.038 "compare_and_write": false, 00:15:41.038 "abort": true, 00:15:41.038 "seek_hole": false, 00:15:41.038 "seek_data": false, 00:15:41.038 "copy": true, 00:15:41.038 "nvme_iov_md": false 00:15:41.038 }, 00:15:41.038 "memory_domains": [ 00:15:41.038 { 00:15:41.038 "dma_device_id": "system", 00:15:41.038 "dma_device_type": 1 00:15:41.038 }, 00:15:41.038 { 00:15:41.038 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.038 "dma_device_type": 2 00:15:41.038 } 00:15:41.038 ], 00:15:41.038 "driver_specific": {} 00:15:41.038 }' 00:15:41.038 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.297 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.297 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.297 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.297 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.297 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.297 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.297 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.297 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.297 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.297 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.558 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.558 18:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:41.558 [2024-07-24 18:18:50.081347] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:41.558 [2024-07-24 18:18:50.081368] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.558 [2024-07-24 18:18:50.081410] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:41.558 [2024-07-24 18:18:50.081454] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.558 [2024-07-24 18:18:50.081463] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x201b490 name Existed_Raid, state offline 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2217076 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2217076 ']' 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2217076 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2217076 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2217076' 00:15:41.558 killing process with pid 2217076 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2217076 00:15:41.558 [2024-07-24 18:18:50.150267] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.558 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2217076 00:15:41.879 [2024-07-24 18:18:50.181595] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.879 18:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:41.879 
00:15:41.879 real 0m24.252s 00:15:41.879 user 0m44.363s 00:15:41.879 sys 0m4.659s 00:15:41.879 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.879 18:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.879 ************************************ 00:15:41.879 END TEST raid_state_function_test_sb 00:15:41.879 ************************************ 00:15:41.879 18:18:50 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:15:41.879 18:18:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:41.879 18:18:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.879 18:18:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.879 ************************************ 00:15:41.879 START TEST raid_superblock_test 00:15:41.879 ************************************ 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2221935 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2221935 /var/tmp/spdk-raid.sock 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2221935 ']' 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.879 18:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:41.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:15:42.138 18:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.138 18:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.138 [2024-07-24 18:18:50.489991] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:15:42.138 [2024-07-24 18:18:50.490037] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221935 ] 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:01.0 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:01.1 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:01.2 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:01.3 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:01.4 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:01.5 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:01.6 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:01.7 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:02.0 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested 
device 0000:b3:02.1 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:02.2 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:02.3 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:02.4 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:02.5 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:02.6 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b3:02.7 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:01.0 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:01.1 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:01.2 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:01.3 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:01.4 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:01.5 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:01.6 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:01.7 
cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:02.0 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:02.1 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:02.2 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:02.3 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:02.4 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:02.5 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:02.6 cannot be used 00:15:42.138 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.138 EAL: Requested device 0000:b5:02.7 cannot be used 00:15:42.138 [2024-07-24 18:18:50.582396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.138 [2024-07-24 18:18:50.656612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.138 [2024-07-24 18:18:50.710306] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.138 [2024-07-24 18:18:50.710335] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.705 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:42.964 malloc1 00:15:42.964 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:43.223 [2024-07-24 18:18:51.602797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:43.223 [2024-07-24 18:18:51.602831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.223 [2024-07-24 18:18:51.602846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10c0cb0 00:15:43.223 [2024-07-24 18:18:51.602854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.223 [2024-07-24 18:18:51.603938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.223 [2024-07-24 18:18:51.603961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:43.223 pt1 00:15:43.223 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( 
i++ )) 00:15:43.223 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:43.223 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:43.223 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:43.223 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:43.223 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.223 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.223 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.223 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:43.223 malloc2 00:15:43.223 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.481 [2024-07-24 18:18:51.943435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.482 [2024-07-24 18:18:51.943471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.482 [2024-07-24 18:18:51.943483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10c20b0 00:15:43.482 [2024-07-24 18:18:51.943492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.482 [2024-07-24 18:18:51.944585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.482 [2024-07-24 18:18:51.944609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:15:43.482 pt2 00:15:43.482 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:43.482 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:43.482 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:43.482 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:15:43.482 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:43.482 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.482 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.482 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.482 18:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:43.740 malloc3 00:15:43.740 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:43.740 [2024-07-24 18:18:52.295873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:43.740 [2024-07-24 18:18:52.295905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.740 [2024-07-24 18:18:52.295916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1258a80 00:15:43.740 [2024-07-24 18:18:52.295924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.740 [2024-07-24 18:18:52.296935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:43.740 [2024-07-24 18:18:52.296956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:43.740 pt3 00:15:43.740 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:43.740 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:43.740 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:15:43.740 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:15:43.740 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:43.741 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.741 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.741 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.741 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:43.999 malloc4 00:15:43.999 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:44.258 [2024-07-24 18:18:52.624334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:44.258 [2024-07-24 18:18:52.624366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.258 [2024-07-24 18:18:52.624381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x125b3a0 00:15:44.258 [2024-07-24 18:18:52.624389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:15:44.258 [2024-07-24 18:18:52.625345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.258 [2024-07-24 18:18:52.625367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:44.258 pt4 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:44.258 [2024-07-24 18:18:52.788780] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:44.258 [2024-07-24 18:18:52.789618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:44.258 [2024-07-24 18:18:52.789662] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:44.258 [2024-07-24 18:18:52.789689] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:44.258 [2024-07-24 18:18:52.789799] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x10b8c70 00:15:44.258 [2024-07-24 18:18:52.789806] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:44.258 [2024-07-24 18:18:52.789936] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10b6eb0 00:15:44.258 [2024-07-24 18:18:52.790035] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10b8c70 00:15:44.258 [2024-07-24 18:18:52.790041] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10b8c70 00:15:44.258 [2024-07-24 18:18:52.790103] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.258 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.517 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.517 "name": "raid_bdev1", 00:15:44.517 "uuid": "19b81204-4205-490a-8fc6-429d171bc38d", 00:15:44.517 "strip_size_kb": 64, 00:15:44.517 "state": "online", 00:15:44.517 "raid_level": "raid0", 00:15:44.517 "superblock": true, 00:15:44.517 "num_base_bdevs": 4, 00:15:44.517 "num_base_bdevs_discovered": 4, 00:15:44.517 "num_base_bdevs_operational": 4, 00:15:44.517 "base_bdevs_list": [ 00:15:44.517 { 00:15:44.517 "name": "pt1", 00:15:44.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.517 "is_configured": 
true, 00:15:44.517 "data_offset": 2048, 00:15:44.517 "data_size": 63488 00:15:44.517 }, 00:15:44.517 { 00:15:44.517 "name": "pt2", 00:15:44.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.517 "is_configured": true, 00:15:44.517 "data_offset": 2048, 00:15:44.517 "data_size": 63488 00:15:44.517 }, 00:15:44.517 { 00:15:44.517 "name": "pt3", 00:15:44.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.517 "is_configured": true, 00:15:44.517 "data_offset": 2048, 00:15:44.517 "data_size": 63488 00:15:44.517 }, 00:15:44.517 { 00:15:44.517 "name": "pt4", 00:15:44.517 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:44.517 "is_configured": true, 00:15:44.517 "data_offset": 2048, 00:15:44.517 "data_size": 63488 00:15:44.517 } 00:15:44.517 ] 00:15:44.517 }' 00:15:44.517 18:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.517 18:18:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.085 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:45.085 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:45.085 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:45.085 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:45.085 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:45.085 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:45.085 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:45.085 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:45.085 [2024-07-24 18:18:53.631132] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.085 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:45.085 "name": "raid_bdev1", 00:15:45.085 "aliases": [ 00:15:45.085 "19b81204-4205-490a-8fc6-429d171bc38d" 00:15:45.085 ], 00:15:45.085 "product_name": "Raid Volume", 00:15:45.085 "block_size": 512, 00:15:45.085 "num_blocks": 253952, 00:15:45.085 "uuid": "19b81204-4205-490a-8fc6-429d171bc38d", 00:15:45.085 "assigned_rate_limits": { 00:15:45.085 "rw_ios_per_sec": 0, 00:15:45.085 "rw_mbytes_per_sec": 0, 00:15:45.085 "r_mbytes_per_sec": 0, 00:15:45.085 "w_mbytes_per_sec": 0 00:15:45.085 }, 00:15:45.085 "claimed": false, 00:15:45.085 "zoned": false, 00:15:45.085 "supported_io_types": { 00:15:45.085 "read": true, 00:15:45.085 "write": true, 00:15:45.085 "unmap": true, 00:15:45.085 "flush": true, 00:15:45.085 "reset": true, 00:15:45.085 "nvme_admin": false, 00:15:45.085 "nvme_io": false, 00:15:45.085 "nvme_io_md": false, 00:15:45.085 "write_zeroes": true, 00:15:45.085 "zcopy": false, 00:15:45.085 "get_zone_info": false, 00:15:45.085 "zone_management": false, 00:15:45.085 "zone_append": false, 00:15:45.085 "compare": false, 00:15:45.085 "compare_and_write": false, 00:15:45.085 "abort": false, 00:15:45.085 "seek_hole": false, 00:15:45.085 "seek_data": false, 00:15:45.085 "copy": false, 00:15:45.085 "nvme_iov_md": false 00:15:45.085 }, 00:15:45.085 "memory_domains": [ 00:15:45.085 { 00:15:45.085 "dma_device_id": "system", 00:15:45.085 "dma_device_type": 1 00:15:45.085 }, 00:15:45.085 { 00:15:45.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.085 "dma_device_type": 2 00:15:45.085 }, 00:15:45.085 { 00:15:45.085 "dma_device_id": "system", 00:15:45.085 "dma_device_type": 1 00:15:45.085 }, 00:15:45.085 { 00:15:45.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.085 "dma_device_type": 2 00:15:45.085 }, 00:15:45.085 { 00:15:45.085 "dma_device_id": "system", 00:15:45.085 
"dma_device_type": 1 00:15:45.085 }, 00:15:45.085 { 00:15:45.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.085 "dma_device_type": 2 00:15:45.085 }, 00:15:45.085 { 00:15:45.085 "dma_device_id": "system", 00:15:45.085 "dma_device_type": 1 00:15:45.085 }, 00:15:45.085 { 00:15:45.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.085 "dma_device_type": 2 00:15:45.085 } 00:15:45.085 ], 00:15:45.085 "driver_specific": { 00:15:45.085 "raid": { 00:15:45.085 "uuid": "19b81204-4205-490a-8fc6-429d171bc38d", 00:15:45.085 "strip_size_kb": 64, 00:15:45.085 "state": "online", 00:15:45.085 "raid_level": "raid0", 00:15:45.085 "superblock": true, 00:15:45.085 "num_base_bdevs": 4, 00:15:45.085 "num_base_bdevs_discovered": 4, 00:15:45.085 "num_base_bdevs_operational": 4, 00:15:45.085 "base_bdevs_list": [ 00:15:45.085 { 00:15:45.085 "name": "pt1", 00:15:45.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.085 "is_configured": true, 00:15:45.085 "data_offset": 2048, 00:15:45.085 "data_size": 63488 00:15:45.085 }, 00:15:45.085 { 00:15:45.085 "name": "pt2", 00:15:45.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.085 "is_configured": true, 00:15:45.085 "data_offset": 2048, 00:15:45.085 "data_size": 63488 00:15:45.085 }, 00:15:45.085 { 00:15:45.085 "name": "pt3", 00:15:45.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.085 "is_configured": true, 00:15:45.085 "data_offset": 2048, 00:15:45.085 "data_size": 63488 00:15:45.085 }, 00:15:45.085 { 00:15:45.085 "name": "pt4", 00:15:45.085 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:45.085 "is_configured": true, 00:15:45.085 "data_offset": 2048, 00:15:45.085 "data_size": 63488 00:15:45.085 } 00:15:45.085 ] 00:15:45.085 } 00:15:45.085 } 00:15:45.085 }' 00:15:45.085 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.343 18:18:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:45.343 pt2 00:15:45.343 pt3 00:15:45.343 pt4' 00:15:45.343 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:45.343 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:45.343 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:45.343 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:45.343 "name": "pt1", 00:15:45.343 "aliases": [ 00:15:45.343 "00000000-0000-0000-0000-000000000001" 00:15:45.343 ], 00:15:45.343 "product_name": "passthru", 00:15:45.343 "block_size": 512, 00:15:45.343 "num_blocks": 65536, 00:15:45.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.343 "assigned_rate_limits": { 00:15:45.343 "rw_ios_per_sec": 0, 00:15:45.343 "rw_mbytes_per_sec": 0, 00:15:45.343 "r_mbytes_per_sec": 0, 00:15:45.343 "w_mbytes_per_sec": 0 00:15:45.343 }, 00:15:45.343 "claimed": true, 00:15:45.343 "claim_type": "exclusive_write", 00:15:45.343 "zoned": false, 00:15:45.343 "supported_io_types": { 00:15:45.344 "read": true, 00:15:45.344 "write": true, 00:15:45.344 "unmap": true, 00:15:45.344 "flush": true, 00:15:45.344 "reset": true, 00:15:45.344 "nvme_admin": false, 00:15:45.344 "nvme_io": false, 00:15:45.344 "nvme_io_md": false, 00:15:45.344 "write_zeroes": true, 00:15:45.344 "zcopy": true, 00:15:45.344 "get_zone_info": false, 00:15:45.344 "zone_management": false, 00:15:45.344 "zone_append": false, 00:15:45.344 "compare": false, 00:15:45.344 "compare_and_write": false, 00:15:45.344 "abort": true, 00:15:45.344 "seek_hole": false, 00:15:45.344 "seek_data": false, 00:15:45.344 "copy": true, 00:15:45.344 "nvme_iov_md": false 00:15:45.344 }, 00:15:45.344 "memory_domains": [ 00:15:45.344 { 00:15:45.344 "dma_device_id": "system", 00:15:45.344 
"dma_device_type": 1 00:15:45.344 }, 00:15:45.344 { 00:15:45.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.344 "dma_device_type": 2 00:15:45.344 } 00:15:45.344 ], 00:15:45.344 "driver_specific": { 00:15:45.344 "passthru": { 00:15:45.344 "name": "pt1", 00:15:45.344 "base_bdev_name": "malloc1" 00:15:45.344 } 00:15:45.344 } 00:15:45.344 }' 00:15:45.344 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.344 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.602 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:45.603 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.603 18:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.603 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:45.603 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.603 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.603 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:45.603 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.603 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.603 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:45.603 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:45.603 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:45.603 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:45.862 18:18:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:45.862 "name": "pt2", 00:15:45.862 "aliases": [ 00:15:45.862 "00000000-0000-0000-0000-000000000002" 00:15:45.862 ], 00:15:45.862 "product_name": "passthru", 00:15:45.862 "block_size": 512, 00:15:45.862 "num_blocks": 65536, 00:15:45.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.862 "assigned_rate_limits": { 00:15:45.862 "rw_ios_per_sec": 0, 00:15:45.862 "rw_mbytes_per_sec": 0, 00:15:45.862 "r_mbytes_per_sec": 0, 00:15:45.862 "w_mbytes_per_sec": 0 00:15:45.862 }, 00:15:45.862 "claimed": true, 00:15:45.862 "claim_type": "exclusive_write", 00:15:45.862 "zoned": false, 00:15:45.862 "supported_io_types": { 00:15:45.862 "read": true, 00:15:45.862 "write": true, 00:15:45.862 "unmap": true, 00:15:45.862 "flush": true, 00:15:45.862 "reset": true, 00:15:45.862 "nvme_admin": false, 00:15:45.862 "nvme_io": false, 00:15:45.862 "nvme_io_md": false, 00:15:45.862 "write_zeroes": true, 00:15:45.862 "zcopy": true, 00:15:45.862 "get_zone_info": false, 00:15:45.862 "zone_management": false, 00:15:45.862 "zone_append": false, 00:15:45.862 "compare": false, 00:15:45.862 "compare_and_write": false, 00:15:45.862 "abort": true, 00:15:45.862 "seek_hole": false, 00:15:45.862 "seek_data": false, 00:15:45.862 "copy": true, 00:15:45.862 "nvme_iov_md": false 00:15:45.862 }, 00:15:45.862 "memory_domains": [ 00:15:45.862 { 00:15:45.862 "dma_device_id": "system", 00:15:45.862 "dma_device_type": 1 00:15:45.862 }, 00:15:45.862 { 00:15:45.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.862 "dma_device_type": 2 00:15:45.862 } 00:15:45.862 ], 00:15:45.862 "driver_specific": { 00:15:45.862 "passthru": { 00:15:45.862 "name": "pt2", 00:15:45.862 "base_bdev_name": "malloc2" 00:15:45.862 } 00:15:45.862 } 00:15:45.862 }' 00:15:45.862 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.862 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.862 18:18:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:45.862 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.121 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.121 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:46.121 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.121 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.121 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.121 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.121 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.121 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.122 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:46.122 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:46.122 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:46.381 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:46.381 "name": "pt3", 00:15:46.381 "aliases": [ 00:15:46.381 "00000000-0000-0000-0000-000000000003" 00:15:46.381 ], 00:15:46.381 "product_name": "passthru", 00:15:46.381 "block_size": 512, 00:15:46.381 "num_blocks": 65536, 00:15:46.381 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.381 "assigned_rate_limits": { 00:15:46.381 "rw_ios_per_sec": 0, 00:15:46.381 "rw_mbytes_per_sec": 0, 00:15:46.381 "r_mbytes_per_sec": 0, 00:15:46.381 "w_mbytes_per_sec": 0 00:15:46.381 }, 00:15:46.381 "claimed": true, 00:15:46.381 
"claim_type": "exclusive_write", 00:15:46.381 "zoned": false, 00:15:46.381 "supported_io_types": { 00:15:46.381 "read": true, 00:15:46.381 "write": true, 00:15:46.381 "unmap": true, 00:15:46.381 "flush": true, 00:15:46.381 "reset": true, 00:15:46.381 "nvme_admin": false, 00:15:46.381 "nvme_io": false, 00:15:46.381 "nvme_io_md": false, 00:15:46.381 "write_zeroes": true, 00:15:46.381 "zcopy": true, 00:15:46.381 "get_zone_info": false, 00:15:46.381 "zone_management": false, 00:15:46.381 "zone_append": false, 00:15:46.381 "compare": false, 00:15:46.381 "compare_and_write": false, 00:15:46.381 "abort": true, 00:15:46.381 "seek_hole": false, 00:15:46.381 "seek_data": false, 00:15:46.381 "copy": true, 00:15:46.381 "nvme_iov_md": false 00:15:46.381 }, 00:15:46.381 "memory_domains": [ 00:15:46.381 { 00:15:46.381 "dma_device_id": "system", 00:15:46.381 "dma_device_type": 1 00:15:46.381 }, 00:15:46.381 { 00:15:46.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.381 "dma_device_type": 2 00:15:46.381 } 00:15:46.381 ], 00:15:46.381 "driver_specific": { 00:15:46.381 "passthru": { 00:15:46.381 "name": "pt3", 00:15:46.381 "base_bdev_name": "malloc3" 00:15:46.381 } 00:15:46.381 } 00:15:46.381 }' 00:15:46.381 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.381 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.381 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:46.381 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.381 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.640 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:46.640 18:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.640 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:15:46.640 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.640 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.640 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.640 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.640 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:46.640 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:46.640 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:46.899 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:46.899 "name": "pt4", 00:15:46.899 "aliases": [ 00:15:46.900 "00000000-0000-0000-0000-000000000004" 00:15:46.900 ], 00:15:46.900 "product_name": "passthru", 00:15:46.900 "block_size": 512, 00:15:46.900 "num_blocks": 65536, 00:15:46.900 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.900 "assigned_rate_limits": { 00:15:46.900 "rw_ios_per_sec": 0, 00:15:46.900 "rw_mbytes_per_sec": 0, 00:15:46.900 "r_mbytes_per_sec": 0, 00:15:46.900 "w_mbytes_per_sec": 0 00:15:46.900 }, 00:15:46.900 "claimed": true, 00:15:46.900 "claim_type": "exclusive_write", 00:15:46.900 "zoned": false, 00:15:46.900 "supported_io_types": { 00:15:46.900 "read": true, 00:15:46.900 "write": true, 00:15:46.900 "unmap": true, 00:15:46.900 "flush": true, 00:15:46.900 "reset": true, 00:15:46.900 "nvme_admin": false, 00:15:46.900 "nvme_io": false, 00:15:46.900 "nvme_io_md": false, 00:15:46.900 "write_zeroes": true, 00:15:46.900 "zcopy": true, 00:15:46.900 "get_zone_info": false, 00:15:46.900 "zone_management": false, 00:15:46.900 "zone_append": false, 00:15:46.900 "compare": false, 00:15:46.900 
"compare_and_write": false, 00:15:46.900 "abort": true, 00:15:46.900 "seek_hole": false, 00:15:46.900 "seek_data": false, 00:15:46.900 "copy": true, 00:15:46.900 "nvme_iov_md": false 00:15:46.900 }, 00:15:46.900 "memory_domains": [ 00:15:46.900 { 00:15:46.900 "dma_device_id": "system", 00:15:46.900 "dma_device_type": 1 00:15:46.900 }, 00:15:46.900 { 00:15:46.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.900 "dma_device_type": 2 00:15:46.900 } 00:15:46.900 ], 00:15:46.900 "driver_specific": { 00:15:46.900 "passthru": { 00:15:46.900 "name": "pt4", 00:15:46.900 "base_bdev_name": "malloc4" 00:15:46.900 } 00:15:46.900 } 00:15:46.900 }' 00:15:46.900 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.900 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.900 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:46.900 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.900 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.900 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:46.900 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.900 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.158 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:47.158 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.158 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.158 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:47.158 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:47.158 18:18:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:47.417 [2024-07-24 18:18:55.768752] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.417 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=19b81204-4205-490a-8fc6-429d171bc38d 00:15:47.417 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 19b81204-4205-490a-8fc6-429d171bc38d ']' 00:15:47.417 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:47.417 [2024-07-24 18:18:55.941005] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.417 [2024-07-24 18:18:55.941015] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.417 [2024-07-24 18:18:55.941051] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.417 [2024-07-24 18:18:55.941091] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.417 [2024-07-24 18:18:55.941098] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b8c70 name raid_bdev1, state offline 00:15:47.417 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.417 18:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:47.676 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:47.676 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:47.676 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 
00:15:47.676 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:47.935 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.935 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:47.935 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.935 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:48.195 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:48.195 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:48.195 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:48.195 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:48.454 18:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:48.713 [2024-07-24 18:18:57.079909] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:48.713 [2024-07-24 18:18:57.080877] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:48.713 [2024-07-24 18:18:57.080906] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:15:48.713 [2024-07-24 18:18:57.080927] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:48.713 [2024-07-24 18:18:57.080960] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:48.713 [2024-07-24 18:18:57.080989] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:48.713 [2024-07-24 18:18:57.081003] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:48.713 [2024-07-24 18:18:57.081016] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:48.713 [2024-07-24 18:18:57.081028] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.713 [2024-07-24 18:18:57.081034] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1264730 name raid_bdev1, state configuring 00:15:48.713 request: 00:15:48.713 { 00:15:48.713 "name": "raid_bdev1", 00:15:48.713 "raid_level": "raid0", 00:15:48.713 "base_bdevs": [ 00:15:48.713 "malloc1", 00:15:48.713 "malloc2", 00:15:48.713 "malloc3", 00:15:48.713 "malloc4" 00:15:48.713 ], 00:15:48.713 "strip_size_kb": 64, 00:15:48.713 "superblock": false, 00:15:48.713 "method": "bdev_raid_create", 00:15:48.713 "req_id": 1 00:15:48.713 } 00:15:48.713 Got JSON-RPC error response 00:15:48.713 response: 00:15:48.713 { 00:15:48.713 "code": -17, 00:15:48.713 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:48.713 } 00:15:48.714 18:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:48.714 18:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:48.714 18:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:15:48.714 18:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:48.714 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.714 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:48.714 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:48.714 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:48.714 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.973 [2024-07-24 18:18:57.416754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.973 [2024-07-24 18:18:57.416781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.973 [2024-07-24 18:18:57.416796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10c0ee0 00:15:48.973 [2024-07-24 18:18:57.416804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.973 [2024-07-24 18:18:57.417883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.973 [2024-07-24 18:18:57.417911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.973 [2024-07-24 18:18:57.417951] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:48.973 [2024-07-24 18:18:57.417969] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.973 pt1 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:48.973 18:18:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.973 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.232 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.232 "name": "raid_bdev1", 00:15:49.232 "uuid": "19b81204-4205-490a-8fc6-429d171bc38d", 00:15:49.232 "strip_size_kb": 64, 00:15:49.232 "state": "configuring", 00:15:49.232 "raid_level": "raid0", 00:15:49.232 "superblock": true, 00:15:49.232 "num_base_bdevs": 4, 00:15:49.232 "num_base_bdevs_discovered": 1, 00:15:49.232 "num_base_bdevs_operational": 4, 00:15:49.232 "base_bdevs_list": [ 00:15:49.232 { 00:15:49.232 "name": "pt1", 00:15:49.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.232 "is_configured": true, 00:15:49.232 "data_offset": 2048, 00:15:49.232 "data_size": 63488 00:15:49.232 }, 
00:15:49.232 { 00:15:49.232 "name": null, 00:15:49.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.232 "is_configured": false, 00:15:49.232 "data_offset": 2048, 00:15:49.232 "data_size": 63488 00:15:49.232 }, 00:15:49.232 { 00:15:49.232 "name": null, 00:15:49.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.232 "is_configured": false, 00:15:49.232 "data_offset": 2048, 00:15:49.232 "data_size": 63488 00:15:49.232 }, 00:15:49.232 { 00:15:49.232 "name": null, 00:15:49.232 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.232 "is_configured": false, 00:15:49.232 "data_offset": 2048, 00:15:49.232 "data_size": 63488 00:15:49.232 } 00:15:49.232 ] 00:15:49.232 }' 00:15:49.232 18:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.232 18:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.491 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:15:49.491 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.750 [2024-07-24 18:18:58.234856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.750 [2024-07-24 18:18:58.234892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.750 [2024-07-24 18:18:58.234907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x125a080 00:15:49.750 [2024-07-24 18:18:58.234915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.750 [2024-07-24 18:18:58.235153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.750 [2024-07-24 18:18:58.235166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.750 [2024-07-24 
18:18:58.235207] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:49.750 [2024-07-24 18:18:58.235220] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.750 pt2 00:15:49.750 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:50.009 [2024-07-24 18:18:58.387257] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.009 18:18:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.009 "name": "raid_bdev1", 00:15:50.009 "uuid": "19b81204-4205-490a-8fc6-429d171bc38d", 00:15:50.009 "strip_size_kb": 64, 00:15:50.009 "state": "configuring", 00:15:50.009 "raid_level": "raid0", 00:15:50.009 "superblock": true, 00:15:50.009 "num_base_bdevs": 4, 00:15:50.009 "num_base_bdevs_discovered": 1, 00:15:50.009 "num_base_bdevs_operational": 4, 00:15:50.009 "base_bdevs_list": [ 00:15:50.009 { 00:15:50.009 "name": "pt1", 00:15:50.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.009 "is_configured": true, 00:15:50.009 "data_offset": 2048, 00:15:50.009 "data_size": 63488 00:15:50.009 }, 00:15:50.009 { 00:15:50.009 "name": null, 00:15:50.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.009 "is_configured": false, 00:15:50.009 "data_offset": 2048, 00:15:50.009 "data_size": 63488 00:15:50.009 }, 00:15:50.009 { 00:15:50.009 "name": null, 00:15:50.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.009 "is_configured": false, 00:15:50.009 "data_offset": 2048, 00:15:50.009 "data_size": 63488 00:15:50.009 }, 00:15:50.009 { 00:15:50.009 "name": null, 00:15:50.009 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.009 "is_configured": false, 00:15:50.009 "data_offset": 2048, 00:15:50.009 "data_size": 63488 00:15:50.009 } 00:15:50.009 ] 00:15:50.009 }' 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.009 18:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.576 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:50.576 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:50.576 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p 
pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.835 [2024-07-24 18:18:59.205353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.835 [2024-07-24 18:18:59.205389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.835 [2024-07-24 18:18:59.205400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10b77a0 00:15:50.835 [2024-07-24 18:18:59.205408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.835 [2024-07-24 18:18:59.205650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.835 [2024-07-24 18:18:59.205662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.835 [2024-07-24 18:18:59.205703] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:50.835 [2024-07-24 18:18:59.205715] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.835 pt2 00:15:50.835 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:50.835 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:50.835 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:50.835 [2024-07-24 18:18:59.377794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:50.835 [2024-07-24 18:18:59.377814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.835 [2024-07-24 18:18:59.377826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10ba010 00:15:50.835 [2024-07-24 18:18:59.377834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.835 [2024-07-24 18:18:59.378006] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.835 [2024-07-24 18:18:59.378017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:50.835 [2024-07-24 18:18:59.378047] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:50.835 [2024-07-24 18:18:59.378057] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:50.835 pt3 00:15:50.835 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:50.835 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:50.835 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:51.094 [2024-07-24 18:18:59.546228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:51.094 [2024-07-24 18:18:59.546252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.094 [2024-07-24 18:18:59.546261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10bb2c0 00:15:51.094 [2024-07-24 18:18:59.546268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.094 [2024-07-24 18:18:59.546452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.094 [2024-07-24 18:18:59.546464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:51.094 [2024-07-24 18:18:59.546494] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:51.094 [2024-07-24 18:18:59.546505] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:51.094 [2024-07-24 18:18:59.546578] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x10b7ec0 00:15:51.094 [2024-07-24 18:18:59.546585] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:51.094 [2024-07-24 18:18:59.546699] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10bd4b0 00:15:51.094 [2024-07-24 18:18:59.546783] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10b7ec0 00:15:51.094 [2024-07-24 18:18:59.546789] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10b7ec0 00:15:51.094 [2024-07-24 18:18:59.546852] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.094 pt4 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.094 18:18:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.094 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.353 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.353 "name": "raid_bdev1", 00:15:51.353 "uuid": "19b81204-4205-490a-8fc6-429d171bc38d", 00:15:51.353 "strip_size_kb": 64, 00:15:51.353 "state": "online", 00:15:51.353 "raid_level": "raid0", 00:15:51.353 "superblock": true, 00:15:51.353 "num_base_bdevs": 4, 00:15:51.353 "num_base_bdevs_discovered": 4, 00:15:51.353 "num_base_bdevs_operational": 4, 00:15:51.353 "base_bdevs_list": [ 00:15:51.353 { 00:15:51.353 "name": "pt1", 00:15:51.353 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.353 "is_configured": true, 00:15:51.353 "data_offset": 2048, 00:15:51.353 "data_size": 63488 00:15:51.353 }, 00:15:51.353 { 00:15:51.353 "name": "pt2", 00:15:51.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.353 "is_configured": true, 00:15:51.353 "data_offset": 2048, 00:15:51.353 "data_size": 63488 00:15:51.353 }, 00:15:51.353 { 00:15:51.353 "name": "pt3", 00:15:51.353 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.353 "is_configured": true, 00:15:51.353 "data_offset": 2048, 00:15:51.353 "data_size": 63488 00:15:51.353 }, 00:15:51.353 { 00:15:51.353 "name": "pt4", 00:15:51.353 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.353 "is_configured": true, 00:15:51.353 "data_offset": 2048, 00:15:51.353 "data_size": 63488 00:15:51.353 } 00:15:51.353 ] 00:15:51.353 }' 00:15:51.353 18:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.353 18:18:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.920 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # 
verify_raid_bdev_properties raid_bdev1 00:15:51.920 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:51.920 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:51.920 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:51.920 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:51.920 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:51.920 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:51.920 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:51.920 [2024-07-24 18:19:00.404653] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.920 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:51.920 "name": "raid_bdev1", 00:15:51.920 "aliases": [ 00:15:51.920 "19b81204-4205-490a-8fc6-429d171bc38d" 00:15:51.920 ], 00:15:51.920 "product_name": "Raid Volume", 00:15:51.920 "block_size": 512, 00:15:51.920 "num_blocks": 253952, 00:15:51.920 "uuid": "19b81204-4205-490a-8fc6-429d171bc38d", 00:15:51.920 "assigned_rate_limits": { 00:15:51.920 "rw_ios_per_sec": 0, 00:15:51.920 "rw_mbytes_per_sec": 0, 00:15:51.920 "r_mbytes_per_sec": 0, 00:15:51.920 "w_mbytes_per_sec": 0 00:15:51.920 }, 00:15:51.920 "claimed": false, 00:15:51.920 "zoned": false, 00:15:51.920 "supported_io_types": { 00:15:51.920 "read": true, 00:15:51.920 "write": true, 00:15:51.920 "unmap": true, 00:15:51.920 "flush": true, 00:15:51.920 "reset": true, 00:15:51.920 "nvme_admin": false, 00:15:51.920 "nvme_io": false, 00:15:51.920 "nvme_io_md": false, 00:15:51.920 "write_zeroes": true, 00:15:51.920 "zcopy": false, 00:15:51.920 
"get_zone_info": false, 00:15:51.920 "zone_management": false, 00:15:51.920 "zone_append": false, 00:15:51.920 "compare": false, 00:15:51.920 "compare_and_write": false, 00:15:51.920 "abort": false, 00:15:51.920 "seek_hole": false, 00:15:51.920 "seek_data": false, 00:15:51.920 "copy": false, 00:15:51.920 "nvme_iov_md": false 00:15:51.920 }, 00:15:51.920 "memory_domains": [ 00:15:51.920 { 00:15:51.920 "dma_device_id": "system", 00:15:51.920 "dma_device_type": 1 00:15:51.920 }, 00:15:51.920 { 00:15:51.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.920 "dma_device_type": 2 00:15:51.920 }, 00:15:51.920 { 00:15:51.920 "dma_device_id": "system", 00:15:51.920 "dma_device_type": 1 00:15:51.920 }, 00:15:51.920 { 00:15:51.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.920 "dma_device_type": 2 00:15:51.920 }, 00:15:51.920 { 00:15:51.920 "dma_device_id": "system", 00:15:51.920 "dma_device_type": 1 00:15:51.920 }, 00:15:51.920 { 00:15:51.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.920 "dma_device_type": 2 00:15:51.920 }, 00:15:51.920 { 00:15:51.920 "dma_device_id": "system", 00:15:51.920 "dma_device_type": 1 00:15:51.920 }, 00:15:51.920 { 00:15:51.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.920 "dma_device_type": 2 00:15:51.920 } 00:15:51.920 ], 00:15:51.920 "driver_specific": { 00:15:51.920 "raid": { 00:15:51.920 "uuid": "19b81204-4205-490a-8fc6-429d171bc38d", 00:15:51.920 "strip_size_kb": 64, 00:15:51.920 "state": "online", 00:15:51.920 "raid_level": "raid0", 00:15:51.920 "superblock": true, 00:15:51.920 "num_base_bdevs": 4, 00:15:51.920 "num_base_bdevs_discovered": 4, 00:15:51.920 "num_base_bdevs_operational": 4, 00:15:51.920 "base_bdevs_list": [ 00:15:51.920 { 00:15:51.920 "name": "pt1", 00:15:51.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.920 "is_configured": true, 00:15:51.920 "data_offset": 2048, 00:15:51.920 "data_size": 63488 00:15:51.920 }, 00:15:51.920 { 00:15:51.920 "name": "pt2", 00:15:51.921 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 2048, 00:15:51.921 "data_size": 63488 00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": "pt3", 00:15:51.921 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 2048, 00:15:51.921 "data_size": 63488 00:15:51.921 }, 00:15:51.921 { 00:15:51.921 "name": "pt4", 00:15:51.921 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.921 "is_configured": true, 00:15:51.921 "data_offset": 2048, 00:15:51.921 "data_size": 63488 00:15:51.921 } 00:15:51.921 ] 00:15:51.921 } 00:15:51.921 } 00:15:51.921 }' 00:15:51.921 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.921 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:51.921 pt2 00:15:51.921 pt3 00:15:51.921 pt4' 00:15:51.921 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.921 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:51.921 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:52.179 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:52.179 "name": "pt1", 00:15:52.179 "aliases": [ 00:15:52.179 "00000000-0000-0000-0000-000000000001" 00:15:52.179 ], 00:15:52.179 "product_name": "passthru", 00:15:52.179 "block_size": 512, 00:15:52.179 "num_blocks": 65536, 00:15:52.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:52.179 "assigned_rate_limits": { 00:15:52.179 "rw_ios_per_sec": 0, 00:15:52.179 "rw_mbytes_per_sec": 0, 00:15:52.179 "r_mbytes_per_sec": 0, 00:15:52.179 "w_mbytes_per_sec": 0 00:15:52.179 }, 00:15:52.179 "claimed": 
true, 00:15:52.179 "claim_type": "exclusive_write", 00:15:52.179 "zoned": false, 00:15:52.179 "supported_io_types": { 00:15:52.179 "read": true, 00:15:52.179 "write": true, 00:15:52.179 "unmap": true, 00:15:52.179 "flush": true, 00:15:52.179 "reset": true, 00:15:52.179 "nvme_admin": false, 00:15:52.179 "nvme_io": false, 00:15:52.179 "nvme_io_md": false, 00:15:52.179 "write_zeroes": true, 00:15:52.179 "zcopy": true, 00:15:52.179 "get_zone_info": false, 00:15:52.179 "zone_management": false, 00:15:52.179 "zone_append": false, 00:15:52.179 "compare": false, 00:15:52.179 "compare_and_write": false, 00:15:52.179 "abort": true, 00:15:52.179 "seek_hole": false, 00:15:52.179 "seek_data": false, 00:15:52.179 "copy": true, 00:15:52.179 "nvme_iov_md": false 00:15:52.179 }, 00:15:52.179 "memory_domains": [ 00:15:52.179 { 00:15:52.179 "dma_device_id": "system", 00:15:52.179 "dma_device_type": 1 00:15:52.179 }, 00:15:52.179 { 00:15:52.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.179 "dma_device_type": 2 00:15:52.179 } 00:15:52.179 ], 00:15:52.179 "driver_specific": { 00:15:52.179 "passthru": { 00:15:52.179 "name": "pt1", 00:15:52.179 "base_bdev_name": "malloc1" 00:15:52.179 } 00:15:52.179 } 00:15:52.179 }' 00:15:52.179 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.179 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.179 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:52.179 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.179 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.437 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:52.437 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.437 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:52.437 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.437 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.437 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.437 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.437 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:52.437 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:52.437 18:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:52.696 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:52.696 "name": "pt2", 00:15:52.696 "aliases": [ 00:15:52.696 "00000000-0000-0000-0000-000000000002" 00:15:52.696 ], 00:15:52.696 "product_name": "passthru", 00:15:52.696 "block_size": 512, 00:15:52.696 "num_blocks": 65536, 00:15:52.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.696 "assigned_rate_limits": { 00:15:52.696 "rw_ios_per_sec": 0, 00:15:52.696 "rw_mbytes_per_sec": 0, 00:15:52.696 "r_mbytes_per_sec": 0, 00:15:52.696 "w_mbytes_per_sec": 0 00:15:52.696 }, 00:15:52.696 "claimed": true, 00:15:52.696 "claim_type": "exclusive_write", 00:15:52.696 "zoned": false, 00:15:52.696 "supported_io_types": { 00:15:52.696 "read": true, 00:15:52.696 "write": true, 00:15:52.696 "unmap": true, 00:15:52.696 "flush": true, 00:15:52.696 "reset": true, 00:15:52.696 "nvme_admin": false, 00:15:52.696 "nvme_io": false, 00:15:52.696 "nvme_io_md": false, 00:15:52.696 "write_zeroes": true, 00:15:52.696 "zcopy": true, 00:15:52.696 "get_zone_info": false, 00:15:52.696 "zone_management": false, 00:15:52.696 "zone_append": false, 00:15:52.696 "compare": false, 00:15:52.696 
"compare_and_write": false, 00:15:52.696 "abort": true, 00:15:52.697 "seek_hole": false, 00:15:52.697 "seek_data": false, 00:15:52.697 "copy": true, 00:15:52.697 "nvme_iov_md": false 00:15:52.697 }, 00:15:52.697 "memory_domains": [ 00:15:52.697 { 00:15:52.697 "dma_device_id": "system", 00:15:52.697 "dma_device_type": 1 00:15:52.697 }, 00:15:52.697 { 00:15:52.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.697 "dma_device_type": 2 00:15:52.697 } 00:15:52.697 ], 00:15:52.697 "driver_specific": { 00:15:52.697 "passthru": { 00:15:52.697 "name": "pt2", 00:15:52.697 "base_bdev_name": "malloc2" 00:15:52.697 } 00:15:52.697 } 00:15:52.697 }' 00:15:52.697 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.697 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.697 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:52.697 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.697 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.956 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:52.956 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.956 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.956 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.956 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.956 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.956 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.956 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:52.956 18:19:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:52.956 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:53.215 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:53.215 "name": "pt3", 00:15:53.215 "aliases": [ 00:15:53.215 "00000000-0000-0000-0000-000000000003" 00:15:53.215 ], 00:15:53.215 "product_name": "passthru", 00:15:53.215 "block_size": 512, 00:15:53.215 "num_blocks": 65536, 00:15:53.215 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.215 "assigned_rate_limits": { 00:15:53.215 "rw_ios_per_sec": 0, 00:15:53.215 "rw_mbytes_per_sec": 0, 00:15:53.215 "r_mbytes_per_sec": 0, 00:15:53.215 "w_mbytes_per_sec": 0 00:15:53.215 }, 00:15:53.215 "claimed": true, 00:15:53.215 "claim_type": "exclusive_write", 00:15:53.215 "zoned": false, 00:15:53.215 "supported_io_types": { 00:15:53.215 "read": true, 00:15:53.215 "write": true, 00:15:53.215 "unmap": true, 00:15:53.215 "flush": true, 00:15:53.215 "reset": true, 00:15:53.215 "nvme_admin": false, 00:15:53.215 "nvme_io": false, 00:15:53.215 "nvme_io_md": false, 00:15:53.215 "write_zeroes": true, 00:15:53.215 "zcopy": true, 00:15:53.215 "get_zone_info": false, 00:15:53.215 "zone_management": false, 00:15:53.215 "zone_append": false, 00:15:53.215 "compare": false, 00:15:53.215 "compare_and_write": false, 00:15:53.215 "abort": true, 00:15:53.215 "seek_hole": false, 00:15:53.215 "seek_data": false, 00:15:53.215 "copy": true, 00:15:53.215 "nvme_iov_md": false 00:15:53.215 }, 00:15:53.215 "memory_domains": [ 00:15:53.215 { 00:15:53.215 "dma_device_id": "system", 00:15:53.215 "dma_device_type": 1 00:15:53.215 }, 00:15:53.215 { 00:15:53.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.215 "dma_device_type": 2 00:15:53.215 } 00:15:53.215 ], 00:15:53.215 "driver_specific": { 00:15:53.215 "passthru": { 00:15:53.215 "name": "pt3", 00:15:53.215 
"base_bdev_name": "malloc3" 00:15:53.215 } 00:15:53.215 } 00:15:53.215 }' 00:15:53.215 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.215 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.215 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:53.215 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.215 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.215 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:53.215 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.474 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.474 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:53.474 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.474 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.474 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:53.474 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:53.474 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:53.474 18:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:53.733 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:53.733 "name": "pt4", 00:15:53.733 "aliases": [ 00:15:53.733 "00000000-0000-0000-0000-000000000004" 00:15:53.733 ], 00:15:53.733 "product_name": "passthru", 00:15:53.733 "block_size": 512, 00:15:53.733 "num_blocks": 65536, 
00:15:53.733 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:53.733 "assigned_rate_limits": { 00:15:53.733 "rw_ios_per_sec": 0, 00:15:53.733 "rw_mbytes_per_sec": 0, 00:15:53.733 "r_mbytes_per_sec": 0, 00:15:53.733 "w_mbytes_per_sec": 0 00:15:53.733 }, 00:15:53.733 "claimed": true, 00:15:53.733 "claim_type": "exclusive_write", 00:15:53.733 "zoned": false, 00:15:53.733 "supported_io_types": { 00:15:53.733 "read": true, 00:15:53.733 "write": true, 00:15:53.733 "unmap": true, 00:15:53.733 "flush": true, 00:15:53.733 "reset": true, 00:15:53.733 "nvme_admin": false, 00:15:53.733 "nvme_io": false, 00:15:53.733 "nvme_io_md": false, 00:15:53.733 "write_zeroes": true, 00:15:53.733 "zcopy": true, 00:15:53.733 "get_zone_info": false, 00:15:53.733 "zone_management": false, 00:15:53.733 "zone_append": false, 00:15:53.733 "compare": false, 00:15:53.733 "compare_and_write": false, 00:15:53.733 "abort": true, 00:15:53.733 "seek_hole": false, 00:15:53.733 "seek_data": false, 00:15:53.733 "copy": true, 00:15:53.733 "nvme_iov_md": false 00:15:53.733 }, 00:15:53.733 "memory_domains": [ 00:15:53.733 { 00:15:53.733 "dma_device_id": "system", 00:15:53.733 "dma_device_type": 1 00:15:53.733 }, 00:15:53.733 { 00:15:53.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.733 "dma_device_type": 2 00:15:53.733 } 00:15:53.733 ], 00:15:53.733 "driver_specific": { 00:15:53.733 "passthru": { 00:15:53.733 "name": "pt4", 00:15:53.733 "base_bdev_name": "malloc4" 00:15:53.733 } 00:15:53.733 } 00:15:53.733 }' 00:15:53.733 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.733 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.733 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:53.733 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.733 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:15:53.733 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:53.733 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.733 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:53.992 [2024-07-24 18:19:02.550193] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 19b81204-4205-490a-8fc6-429d171bc38d '!=' 19b81204-4205-490a-8fc6-429d171bc38d ']' 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2221935 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2221935 ']' 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2221935 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # 
uname 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.992 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2221935 00:15:54.251 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.251 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.251 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2221935' 00:15:54.251 killing process with pid 2221935 00:15:54.251 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2221935 00:15:54.251 [2024-07-24 18:19:02.624569] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.251 [2024-07-24 18:19:02.624615] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.251 [2024-07-24 18:19:02.624664] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.251 [2024-07-24 18:19:02.624672] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b7ec0 name raid_bdev1, state offline 00:15:54.251 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2221935 00:15:54.251 [2024-07-24 18:19:02.655612] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.251 18:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:54.251 00:15:54.251 real 0m12.388s 00:15:54.251 user 0m22.196s 00:15:54.251 sys 0m2.365s 00:15:54.251 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.251 18:19:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.251 ************************************ 00:15:54.251 END TEST raid_superblock_test 00:15:54.251 
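The teardown above follows a recurring autotest cleanup pattern: probe the pid with `kill -0`, read its command name with `ps -o comm=` so a `sudo` wrapper is never killed, then kill and reap it. A minimal self-contained sketch of that pattern (the function name and the sudo guard are assumptions modeled on `common/autotest_common.sh`, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the cleanup pattern logged above: kill -0 probes whether the
# pid is still alive, ps -o comm= fetches its name so we never kill a
# sudo wrapper, then the process is killed and reaped with wait.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                  # refuse to kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap; ignore exit code
}

sleep 60 &
killprocess $!
```

Run directly; the backgrounded `sleep` stands in for the SPDK target process that the real test kills.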
************************************ 00:15:54.511 18:19:02 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:15:54.511 18:19:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:54.511 18:19:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.511 18:19:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.511 ************************************ 00:15:54.511 START TEST raid_read_error_test 00:15:54.511 ************************************ 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 
00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8xRrBAYomx 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2224474 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2224474 /var/tmp/spdk-raid.sock 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2224474 ']' 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:54.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.511 18:19:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.511 [2024-07-24 18:19:02.980005] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
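`bdevperf` is launched with `-z` and the script blocks in `waitforlisten` until the RPC socket `/var/tmp/spdk-raid.sock` is accepting commands. A rough, self-contained sketch of that polling loop (the parameter names and retry bound are assumptions; the real helper also issues an `rpc.py` probe over the socket, which a plain socket-file check stands in for here):

```shell
# Sketch of a waitforlisten-style loop: poll until the target's UNIX
# domain socket exists while confirming the target process is still
# alive; give up after max_retries polls.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock} max_retries=${3:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1      # target died during startup
        [ -S "$rpc_addr" ] && return 0              # socket is up
        sleep 0.1
    done
    return 1                                        # timed out
}
```

With the pid dead or the socket path never appearing, the loop returns non-zero, which is what lets the caller bail out instead of hanging the whole test.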
00:15:54.511 [2024-07-24 18:19:02.980049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224474 ] 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:01.0 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:01.1 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:01.2 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:01.3 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:01.4 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:01.5 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:01.6 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:01.7 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:02.0 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:02.1 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:02.2 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:02.3 cannot be used 
00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:02.4 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:02.5 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:02.6 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b3:02.7 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:01.0 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:01.1 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:01.2 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:01.3 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:01.4 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:01.5 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:01.6 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:01.7 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:02.0 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:02.1 cannot be used 00:15:54.511 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:02.2 cannot be used 00:15:54.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.511 EAL: Requested device 0000:b5:02.3 cannot be used 00:15:54.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.512 EAL: Requested device 0000:b5:02.4 cannot be used 00:15:54.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.512 EAL: Requested device 0000:b5:02.5 cannot be used 00:15:54.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.512 EAL: Requested device 0000:b5:02.6 cannot be used 00:15:54.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:54.512 EAL: Requested device 0000:b5:02.7 cannot be used 00:15:54.512 [2024-07-24 18:19:03.072344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.770 [2024-07-24 18:19:03.141500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.770 [2024-07-24 18:19:03.192207] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.770 [2024-07-24 18:19:03.192234] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.336 18:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.336 18:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:55.336 18:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:55.336 18:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:55.336 BaseBdev1_malloc 00:15:55.595 18:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:55.595 true 00:15:55.595 18:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:55.854 [2024-07-24 18:19:04.264714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:55.854 [2024-07-24 18:19:04.264751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.854 [2024-07-24 18:19:04.264764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17e7ed0 00:15:55.854 [2024-07-24 18:19:04.264772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.854 [2024-07-24 18:19:04.265909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.854 [2024-07-24 18:19:04.265933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.854 BaseBdev1 00:15:55.854 18:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:55.854 18:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:55.854 BaseBdev2_malloc 00:15:56.112 18:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:56.112 true 00:15:56.112 18:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:56.371 [2024-07-24 18:19:04.761511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:15:56.371 [2024-07-24 18:19:04.761542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.371 [2024-07-24 18:19:04.761553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17ecb60 00:15:56.371 [2024-07-24 18:19:04.761561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.371 [2024-07-24 18:19:04.762485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.371 [2024-07-24 18:19:04.762507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:56.371 BaseBdev2 00:15:56.371 18:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:56.371 18:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:56.371 BaseBdev3_malloc 00:15:56.371 18:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:56.630 true 00:15:56.630 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:56.923 [2024-07-24 18:19:05.266176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:56.923 [2024-07-24 18:19:05.266202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.923 [2024-07-24 18:19:05.266214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17edad0 00:15:56.923 [2024-07-24 18:19:05.266221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.923 [2024-07-24 
18:19:05.267124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.923 [2024-07-24 18:19:05.267144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:56.923 BaseBdev3 00:15:56.923 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:56.923 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:56.923 BaseBdev4_malloc 00:15:56.923 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:57.182 true 00:15:57.182 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:57.182 [2024-07-24 18:19:05.766792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:57.182 [2024-07-24 18:19:05.766822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.182 [2024-07-24 18:19:05.766835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17efd40 00:15:57.182 [2024-07-24 18:19:05.766843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.182 [2024-07-24 18:19:05.767798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.182 [2024-07-24 18:19:05.767821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:57.182 BaseBdev4 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:57.440 [2024-07-24 18:19:05.927239] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.440 [2024-07-24 18:19:05.928044] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.440 [2024-07-24 18:19:05.928092] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.440 [2024-07-24 18:19:05.928130] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:57.440 [2024-07-24 18:19:05.928289] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x17f0b10 00:15:57.440 [2024-07-24 18:19:05.928297] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:57.440 [2024-07-24 18:19:05.928417] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17f1dc0 00:15:57.440 [2024-07-24 18:19:05.928514] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17f0b10 00:15:57.440 [2024-07-24 18:19:05.928520] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17f0b10 00:15:57.440 [2024-07-24 18:19:05.928585] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 
-- # local num_base_bdevs_operational=4 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.440 18:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.698 18:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:57.698 "name": "raid_bdev1", 00:15:57.698 "uuid": "8427ae97-c87e-41e1-85aa-b64a037acccb", 00:15:57.698 "strip_size_kb": 64, 00:15:57.698 "state": "online", 00:15:57.698 "raid_level": "raid0", 00:15:57.698 "superblock": true, 00:15:57.698 "num_base_bdevs": 4, 00:15:57.698 "num_base_bdevs_discovered": 4, 00:15:57.698 "num_base_bdevs_operational": 4, 00:15:57.698 "base_bdevs_list": [ 00:15:57.698 { 00:15:57.698 "name": "BaseBdev1", 00:15:57.698 "uuid": "83577475-5726-5fd8-b9c5-974cbd9d4641", 00:15:57.698 "is_configured": true, 00:15:57.698 "data_offset": 2048, 00:15:57.698 "data_size": 63488 00:15:57.698 }, 00:15:57.698 { 00:15:57.698 "name": "BaseBdev2", 00:15:57.698 "uuid": "9a56d0b0-f4c7-559e-a946-bd1adb693b52", 00:15:57.698 "is_configured": true, 00:15:57.699 "data_offset": 2048, 00:15:57.699 "data_size": 63488 00:15:57.699 }, 00:15:57.699 { 00:15:57.699 "name": "BaseBdev3", 00:15:57.699 "uuid": "5e59cf3c-338c-5df1-af8b-a8b01987dad0", 00:15:57.699 "is_configured": true, 00:15:57.699 "data_offset": 2048, 00:15:57.699 "data_size": 63488 00:15:57.699 }, 00:15:57.699 { 00:15:57.699 "name": 
"BaseBdev4", 00:15:57.699 "uuid": "378dbb66-e1d4-5039-8763-9183b4b54748", 00:15:57.699 "is_configured": true, 00:15:57.699 "data_offset": 2048, 00:15:57.699 "data_size": 63488 00:15:57.699 } 00:15:57.699 ] 00:15:57.699 }' 00:15:57.699 18:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:57.699 18:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.266 18:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:58.266 18:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:58.266 [2024-07-24 18:19:06.681401] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1643cb0 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:59.205 18:19:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.205 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.464 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.464 "name": "raid_bdev1", 00:15:59.464 "uuid": "8427ae97-c87e-41e1-85aa-b64a037acccb", 00:15:59.464 "strip_size_kb": 64, 00:15:59.464 "state": "online", 00:15:59.464 "raid_level": "raid0", 00:15:59.464 "superblock": true, 00:15:59.464 "num_base_bdevs": 4, 00:15:59.464 "num_base_bdevs_discovered": 4, 00:15:59.464 "num_base_bdevs_operational": 4, 00:15:59.464 "base_bdevs_list": [ 00:15:59.464 { 00:15:59.464 "name": "BaseBdev1", 00:15:59.464 "uuid": "83577475-5726-5fd8-b9c5-974cbd9d4641", 00:15:59.464 "is_configured": true, 00:15:59.464 "data_offset": 2048, 00:15:59.464 "data_size": 63488 00:15:59.464 }, 00:15:59.464 { 00:15:59.464 "name": "BaseBdev2", 00:15:59.464 "uuid": "9a56d0b0-f4c7-559e-a946-bd1adb693b52", 00:15:59.464 "is_configured": true, 00:15:59.464 "data_offset": 2048, 00:15:59.464 "data_size": 63488 00:15:59.464 }, 00:15:59.464 { 00:15:59.464 "name": "BaseBdev3", 00:15:59.464 "uuid": "5e59cf3c-338c-5df1-af8b-a8b01987dad0", 00:15:59.464 "is_configured": true, 00:15:59.464 "data_offset": 2048, 00:15:59.464 "data_size": 63488 
00:15:59.464 }, 00:15:59.464 { 00:15:59.464 "name": "BaseBdev4", 00:15:59.464 "uuid": "378dbb66-e1d4-5039-8763-9183b4b54748", 00:15:59.464 "is_configured": true, 00:15:59.464 "data_offset": 2048, 00:15:59.464 "data_size": 63488 00:15:59.464 } 00:15:59.464 ] 00:15:59.464 }' 00:15:59.464 18:19:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.464 18:19:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.030 18:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:00.030 [2024-07-24 18:19:08.597721] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.030 [2024-07-24 18:19:08.597753] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.030 [2024-07-24 18:19:08.599728] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.030 [2024-07-24 18:19:08.599755] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.030 [2024-07-24 18:19:08.599781] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.030 [2024-07-24 18:19:08.599788] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17f0b10 name raid_bdev1, state offline 00:16:00.030 0 00:16:00.030 18:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2224474 00:16:00.030 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2224474 ']' 00:16:00.030 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2224474 00:16:00.030 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:00.030 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
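The `verify_raid_bdev_state` steps above pull `bdev_raid_get_bdevs all`, select one bdev by name with `jq`, and compare individual fields. A trimmed, self-contained sketch of that selection (the JSON literal is a cut-down stand-in for the RPC dump, not live output):

```shell
# Sketch of the verification step: select one raid bdev from the RPC
# output by name, then assert on its state, level, and discovery count.
bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid0",
         "num_base_bdevs_discovered":4,"num_base_bdevs_operational":4}]'

info=$(echo "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$info" | jq -r '.state')
level=$(echo "$info" | jq -r '.raid_level')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')

[ "$state" = online ] && [ "$level" = raid0 ] && [ "$discovered" -eq 4 ] \
    && echo "raid_bdev1 verified"
```

The `select(.name == "...")` filter is what makes the check robust when `bdev_raid_get_bdevs all` returns several bdevs in one array.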
00:16:00.030 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2224474 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2224474' 00:16:00.289 killing process with pid 2224474 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2224474 00:16:00.289 [2024-07-24 18:19:08.668608] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2224474 00:16:00.289 [2024-07-24 18:19:08.695036] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8xRrBAYomx 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:16:00.289 00:16:00.289 real 0m5.975s 00:16:00.289 user 0m9.228s 00:16:00.289 sys 0m1.076s 00:16:00.289 18:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:00.289 18:19:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.289 ************************************ 00:16:00.289 END TEST raid_read_error_test 00:16:00.289 ************************************ 00:16:00.549 18:19:08 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:16:00.549 18:19:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:00.549 18:19:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:00.549 18:19:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:00.549 ************************************ 00:16:00.549 START TEST raid_write_error_test 00:16:00.549 ************************************ 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs 
)) 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.7q8qwIdAsh 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # 
raid_pid=2225565 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2225565 /var/tmp/spdk-raid.sock 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2225565 ']' 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:00.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.549 18:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.549 [2024-07-24 18:19:09.045167] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:16:00.550 [2024-07-24 18:19:09.045218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225565 ] 00:16:00.550 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:00.550 EAL: Requested device 0000:b3:01.0 cannot be used
[the same qat_pci_device_allocate()/EAL "cannot be used" message pair repeats for each remaining QAT device, 0000:b3:01.1 through 0000:b5:02.7]
00:16:00.550 [2024-07-24 18:19:09.139806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.809 [2024-07-24 18:19:09.214093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.809 [2024-07-24 18:19:09.275409] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.809 [2024-07-24 18:19:09.275435] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.377 18:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.377 18:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:01.377 18:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:01.377 18:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:01.644 BaseBdev1_malloc 00:16:01.644 18:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:01.644 true 00:16:01.644 18:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:01.904 [2024-07-24 18:19:10.320010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:01.904 [2024-07-24 18:19:10.320044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.904 [2024-07-24 18:19:10.320059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1512ed0 00:16:01.904 [2024-07-24 18:19:10.320068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.904 [2024-07-24 18:19:10.321277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.904 [2024-07-24 18:19:10.321301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:01.904 BaseBdev1 00:16:01.904 18:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:01.904 18:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:01.904 BaseBdev2_malloc 00:16:01.904 18:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:02.163 true 00:16:02.163 18:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:02.422 [2024-07-24 18:19:10.821014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:16:02.422 [2024-07-24 18:19:10.821047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.422 [2024-07-24 18:19:10.821061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1517b60 00:16:02.422 [2024-07-24 18:19:10.821069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.422 [2024-07-24 18:19:10.822100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.422 [2024-07-24 18:19:10.822123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:02.422 BaseBdev2 00:16:02.422 18:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:02.422 18:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:02.422 BaseBdev3_malloc 00:16:02.422 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:02.681 true 00:16:02.681 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:02.940 [2024-07-24 18:19:11.297854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:02.940 [2024-07-24 18:19:11.297884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.940 [2024-07-24 18:19:11.297898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1518ad0 00:16:02.940 [2024-07-24 18:19:11.297906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.940 [2024-07-24 
18:19:11.298862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.940 [2024-07-24 18:19:11.298885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:02.940 BaseBdev3 00:16:02.940 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:02.940 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:02.940 BaseBdev4_malloc 00:16:02.940 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:03.199 true 00:16:03.199 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:03.458 [2024-07-24 18:19:11.806745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:03.458 [2024-07-24 18:19:11.806778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.458 [2024-07-24 18:19:11.806793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151ad40 00:16:03.458 [2024-07-24 18:19:11.806802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.458 [2024-07-24 18:19:11.807819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.458 [2024-07-24 18:19:11.807842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:03.458 BaseBdev4 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:03.458 [2024-07-24 18:19:11.975219] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.458 [2024-07-24 18:19:11.976078] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.458 [2024-07-24 18:19:11.976122] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.458 [2024-07-24 18:19:11.976157] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:03.458 [2024-07-24 18:19:11.976307] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x151bb10 00:16:03.458 [2024-07-24 18:19:11.976314] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:03.458 [2024-07-24 18:19:11.976441] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x151cdc0 00:16:03.458 [2024-07-24 18:19:11.976537] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x151bb10 00:16:03.458 [2024-07-24 18:19:11.976543] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x151bb10 00:16:03.458 [2024-07-24 18:19:11.976606] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:03.458 18:19:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.458 18:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.717 18:19:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.717 "name": "raid_bdev1", 00:16:03.717 "uuid": "b360c5f6-c1eb-49ee-bfd6-278ceeac3297", 00:16:03.717 "strip_size_kb": 64, 00:16:03.717 "state": "online", 00:16:03.717 "raid_level": "raid0", 00:16:03.717 "superblock": true, 00:16:03.717 "num_base_bdevs": 4, 00:16:03.717 "num_base_bdevs_discovered": 4, 00:16:03.717 "num_base_bdevs_operational": 4, 00:16:03.717 "base_bdevs_list": [ 00:16:03.717 { 00:16:03.717 "name": "BaseBdev1", 00:16:03.717 "uuid": "e3bf5cf9-99df-533b-ab87-a37538d811f0", 00:16:03.717 "is_configured": true, 00:16:03.717 "data_offset": 2048, 00:16:03.717 "data_size": 63488 00:16:03.717 }, 00:16:03.717 { 00:16:03.717 "name": "BaseBdev2", 00:16:03.717 "uuid": "556fafbb-9ae2-579d-a4c8-c02f516b7aa6", 00:16:03.717 "is_configured": true, 00:16:03.717 "data_offset": 2048, 00:16:03.717 "data_size": 63488 00:16:03.717 }, 00:16:03.717 { 00:16:03.717 "name": "BaseBdev3", 00:16:03.717 "uuid": "a56b9284-2900-5bed-8a00-0f70dbcde35a", 00:16:03.717 "is_configured": true, 00:16:03.717 "data_offset": 2048, 00:16:03.717 "data_size": 
63488 00:16:03.717 }, 00:16:03.717 { 00:16:03.717 "name": "BaseBdev4", 00:16:03.717 "uuid": "f09d2d4d-bfd7-599a-a59c-4638327ea9a4", 00:16:03.717 "is_configured": true, 00:16:03.717 "data_offset": 2048, 00:16:03.718 "data_size": 63488 00:16:03.718 } 00:16:03.718 ] 00:16:03.718 }' 00:16:03.718 18:19:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.718 18:19:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.286 18:19:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:04.286 18:19:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:04.286 [2024-07-24 18:19:12.709303] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x136ecb0 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.223 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.483 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.483 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.483 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:05.483 "name": "raid_bdev1", 00:16:05.483 "uuid": "b360c5f6-c1eb-49ee-bfd6-278ceeac3297", 00:16:05.483 "strip_size_kb": 64, 00:16:05.483 "state": "online", 00:16:05.483 "raid_level": "raid0", 00:16:05.483 "superblock": true, 00:16:05.483 "num_base_bdevs": 4, 00:16:05.483 "num_base_bdevs_discovered": 4, 00:16:05.483 "num_base_bdevs_operational": 4, 00:16:05.483 "base_bdevs_list": [ 00:16:05.483 { 00:16:05.483 "name": "BaseBdev1", 00:16:05.483 "uuid": "e3bf5cf9-99df-533b-ab87-a37538d811f0", 00:16:05.483 "is_configured": true, 00:16:05.483 "data_offset": 2048, 00:16:05.483 "data_size": 63488 00:16:05.483 }, 00:16:05.483 { 00:16:05.483 "name": "BaseBdev2", 00:16:05.483 "uuid": "556fafbb-9ae2-579d-a4c8-c02f516b7aa6", 00:16:05.483 "is_configured": true, 00:16:05.483 "data_offset": 2048, 00:16:05.483 "data_size": 63488 00:16:05.483 }, 00:16:05.483 { 00:16:05.483 "name": "BaseBdev3", 00:16:05.483 "uuid": "a56b9284-2900-5bed-8a00-0f70dbcde35a", 00:16:05.483 "is_configured": 
true, 00:16:05.483 "data_offset": 2048, 00:16:05.483 "data_size": 63488 00:16:05.483 }, 00:16:05.483 { 00:16:05.483 "name": "BaseBdev4", 00:16:05.483 "uuid": "f09d2d4d-bfd7-599a-a59c-4638327ea9a4", 00:16:05.483 "is_configured": true, 00:16:05.483 "data_offset": 2048, 00:16:05.483 "data_size": 63488 00:16:05.483 } 00:16:05.483 ] 00:16:05.483 }' 00:16:05.483 18:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:05.483 18:19:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.051 18:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:06.051 [2024-07-24 18:19:14.621488] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.051 [2024-07-24 18:19:14.621524] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.051 [2024-07-24 18:19:14.623493] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.051 [2024-07-24 18:19:14.623518] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.051 [2024-07-24 18:19:14.623543] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.051 [2024-07-24 18:19:14.623550] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x151bb10 name raid_bdev1, state offline 00:16:06.051 0 00:16:06.051 18:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2225565 00:16:06.051 18:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2225565 ']' 00:16:06.051 18:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2225565 00:16:06.051 18:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:06.310 18:19:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:06.310 18:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2225565 00:16:06.310 18:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:06.310 18:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:06.310 18:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2225565' 00:16:06.310 killing process with pid 2225565 00:16:06.310 18:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2225565 00:16:06.310 [2024-07-24 18:19:14.697510] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.310 18:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2225565 00:16:06.310 [2024-07-24 18:19:14.721800] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.310 18:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.7q8qwIdAsh 00:16:06.310 18:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:06.310 18:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:06.570 18:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:16:06.570 18:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:16:06.570 18:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:06.570 18:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:06.570 18:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:16:06.570 00:16:06.570 real 0m5.939s 00:16:06.570 user 0m9.137s 00:16:06.570 sys 0m1.076s 00:16:06.570 18:19:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.570 18:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.570 ************************************ 00:16:06.570 END TEST raid_write_error_test 00:16:06.570 ************************************ 00:16:06.570 18:19:14 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:06.570 18:19:14 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:06.570 18:19:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:06.570 18:19:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.570 18:19:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.570 ************************************ 00:16:06.570 START TEST raid_state_function_test 00:16:06.570 ************************************ 00:16:06.570 18:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:16:06.570 18:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:06.570 18:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:06.570 18:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:06.570 18:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:06.570 18:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:06.570 18:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:06.570 18:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']'
00:16:06.570 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64'
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']'
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg=
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2226648
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2226648'
00:16:06.571 Process raid pid: 2226648
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2226648 /var/tmp/spdk-raid.sock
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2226648 ']'
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:06.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:06.571 18:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:06.571 [2024-07-24 18:19:15.059905] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:16:06.571 [2024-07-24 18:19:15.059949] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:01.0 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:01.1 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:01.2 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:01.3 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:01.4 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:01.5 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:01.6 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:01.7 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:02.0 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:02.1 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:02.2 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:02.3 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:02.4 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:02.5 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:02.6 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b3:02.7 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:01.0 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:01.1 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:01.2 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:01.3 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:01.4 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:01.5 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:01.6 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:01.7 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:02.0 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:02.1 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:02.2 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:02.3 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:02.4 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:02.5 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:02.6 cannot be used
00:16:06.571 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:06.571 EAL: Requested device 0000:b5:02.7 cannot be used
00:16:06.571 [2024-07-24 18:19:15.153702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:06.830 [2024-07-24 18:19:15.228237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:06.830 [2024-07-24 18:19:15.281735] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:06.830 [2024-07-24 18:19:15.281761] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:07.399 18:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:07.399 18:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:16:07.399 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:07.399 [2024-07-24 18:19:15.984461] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:07.399 [2024-07-24 18:19:15.984488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:07.399 [2024-07-24 18:19:15.984495] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:07.399 [2024-07-24 18:19:15.984503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:07.399 [2024-07-24 18:19:15.984508] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:07.399 [2024-07-24 18:19:15.984515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:07.399 [2024-07-24 18:19:15.984521] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:07.399 [2024-07-24 18:19:15.984528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:07.659 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:07.659 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:07.659 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:07.659 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:07.659 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:07.659 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:16:07.659 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:07.659 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:07.659 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:07.659 18:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:07.659 18:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:07.659 18:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:07.659 18:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:07.659 "name": "Existed_Raid",
00:16:07.659 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:07.659 "strip_size_kb": 64,
00:16:07.659 "state": "configuring",
00:16:07.659 "raid_level": "concat",
00:16:07.659 "superblock": false,
00:16:07.659 "num_base_bdevs": 4,
00:16:07.659 "num_base_bdevs_discovered": 0,
00:16:07.659 "num_base_bdevs_operational": 4,
00:16:07.659 "base_bdevs_list": [
00:16:07.659 {
00:16:07.659 "name": "BaseBdev1",
00:16:07.659 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:07.659 "is_configured": false,
00:16:07.659 "data_offset": 0,
00:16:07.659 "data_size": 0
00:16:07.659 },
00:16:07.659 {
00:16:07.659 "name": "BaseBdev2",
00:16:07.659 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:07.659 "is_configured": false,
00:16:07.659 "data_offset": 0,
00:16:07.659 "data_size": 0
00:16:07.659 },
00:16:07.659 {
00:16:07.659 "name": "BaseBdev3",
00:16:07.659 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:07.659 "is_configured": false,
00:16:07.659 "data_offset": 0,
00:16:07.659 "data_size": 0
00:16:07.659 },
00:16:07.659 {
00:16:07.659 "name": "BaseBdev4",
00:16:07.659 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:07.659 "is_configured": false,
00:16:07.659 "data_offset": 0,
00:16:07.659 "data_size": 0
00:16:07.659 }
00:16:07.659 ]
00:16:07.659 }'
00:16:07.659 18:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:07.659 18:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.228 18:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:08.228 [2024-07-24 18:19:16.814510] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:08.228 [2024-07-24 18:19:16.814528] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20fd1e0 name Existed_Raid, state configuring
00:16:08.488 18:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:08.488 [2024-07-24 18:19:16.990977] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:08.488 [2024-07-24 18:19:16.990995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:08.488 [2024-07-24 18:19:16.991000] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:08.488 [2024-07-24 18:19:16.991007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:08.488 [2024-07-24 18:19:16.991012] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:08.488 [2024-07-24 18:19:16.991019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:08.488 [2024-07-24 18:19:16.991024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:08.488 [2024-07-24 18:19:16.991031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:08.488 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:08.747 [2024-07-24 18:19:17.163766] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:08.747 BaseBdev1
00:16:08.747 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:16:08.747 18:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:16:08.747 18:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:08.747 18:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:08.747 18:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:08.747 18:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:08.747 18:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:09.006 18:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:09.006 [
00:16:09.006 {
00:16:09.006 "name": "BaseBdev1",
00:16:09.006 "aliases": [
00:16:09.006 "0b69d5d5-835a-4dbc-96e3-d55d8ba1a096"
00:16:09.006 ],
00:16:09.006 "product_name": "Malloc disk",
00:16:09.006 "block_size": 512,
00:16:09.006 "num_blocks": 65536,
00:16:09.006 "uuid": "0b69d5d5-835a-4dbc-96e3-d55d8ba1a096",
00:16:09.006 "assigned_rate_limits": {
00:16:09.006 "rw_ios_per_sec": 0,
00:16:09.006 "rw_mbytes_per_sec": 0,
00:16:09.006 "r_mbytes_per_sec": 0,
00:16:09.006 "w_mbytes_per_sec": 0
00:16:09.006 },
00:16:09.006 "claimed": true,
00:16:09.006 "claim_type": "exclusive_write",
00:16:09.006 "zoned": false,
00:16:09.006 "supported_io_types": {
00:16:09.006 "read": true,
00:16:09.006 "write": true,
00:16:09.006 "unmap": true,
00:16:09.006 "flush": true,
00:16:09.006 "reset": true,
00:16:09.006 "nvme_admin": false,
00:16:09.006 "nvme_io": false,
00:16:09.007 "nvme_io_md": false,
00:16:09.007 "write_zeroes": true,
00:16:09.007 "zcopy": true,
00:16:09.007 "get_zone_info": false,
00:16:09.007 "zone_management": false,
00:16:09.007 "zone_append": false,
00:16:09.007 "compare": false,
00:16:09.007 "compare_and_write": false,
00:16:09.007 "abort": true,
00:16:09.007 "seek_hole": false,
00:16:09.007 "seek_data": false,
00:16:09.007 "copy": true,
00:16:09.007 "nvme_iov_md": false
00:16:09.007 },
00:16:09.007 "memory_domains": [
00:16:09.007 {
00:16:09.007 "dma_device_id": "system",
00:16:09.007 "dma_device_type": 1
00:16:09.007 },
00:16:09.007 {
00:16:09.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:09.007 "dma_device_type": 2
00:16:09.007 }
00:16:09.007 ],
00:16:09.007 "driver_specific": {}
00:16:09.007 }
00:16:09.007 ]
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:09.007 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:09.266 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:09.266 "name": "Existed_Raid",
00:16:09.266 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:09.266 "strip_size_kb": 64,
00:16:09.266 "state": "configuring",
00:16:09.266 "raid_level": "concat",
00:16:09.266 "superblock": false,
00:16:09.266 "num_base_bdevs": 4,
00:16:09.266 "num_base_bdevs_discovered": 1,
00:16:09.266 "num_base_bdevs_operational": 4,
00:16:09.266 "base_bdevs_list": [
00:16:09.266 {
00:16:09.266 "name": "BaseBdev1",
00:16:09.266 "uuid": "0b69d5d5-835a-4dbc-96e3-d55d8ba1a096",
00:16:09.266 "is_configured": true,
00:16:09.266 "data_offset": 0,
00:16:09.266 "data_size": 65536
00:16:09.266 },
00:16:09.266 {
00:16:09.266 "name": "BaseBdev2",
00:16:09.266 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:09.266 "is_configured": false,
00:16:09.266 "data_offset": 0,
00:16:09.266 "data_size": 0
00:16:09.266 },
00:16:09.266 {
00:16:09.266 "name": "BaseBdev3",
00:16:09.266 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:09.266 "is_configured": false,
00:16:09.266 "data_offset": 0,
00:16:09.266 "data_size": 0
00:16:09.266 },
00:16:09.266 {
00:16:09.266 "name": "BaseBdev4",
00:16:09.266 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:09.266 "is_configured": false,
00:16:09.266 "data_offset": 0,
00:16:09.266 "data_size": 0
00:16:09.267 }
00:16:09.267 ]
00:16:09.267 }'
00:16:09.267 18:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:09.267 18:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.835 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:09.835 [2024-07-24 18:19:18.322750] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:09.835 [2024-07-24 18:19:18.322780] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20fca50 name Existed_Raid, state configuring
00:16:09.835 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:10.095 [2024-07-24 18:19:18.499227] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:10.095 [2024-07-24 18:19:18.500264] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:10.095 [2024-07-24 18:19:18.500289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:10.095 [2024-07-24 18:19:18.500295] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:10.095 [2024-07-24 18:19:18.500302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:10.095 [2024-07-24 18:19:18.500307] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:10.095 [2024-07-24 18:19:18.500331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:10.095 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:16:10.095 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:10.096 "name": "Existed_Raid",
00:16:10.096 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:10.096 "strip_size_kb": 64,
00:16:10.096 "state": "configuring",
00:16:10.096 "raid_level": "concat",
00:16:10.096 "superblock": false,
00:16:10.096 "num_base_bdevs": 4,
00:16:10.096 "num_base_bdevs_discovered": 1,
00:16:10.096 "num_base_bdevs_operational": 4,
00:16:10.096 "base_bdevs_list": [
00:16:10.096 {
00:16:10.096 "name": "BaseBdev1",
00:16:10.096 "uuid": "0b69d5d5-835a-4dbc-96e3-d55d8ba1a096",
00:16:10.096 "is_configured": true,
00:16:10.096 "data_offset": 0,
00:16:10.096 "data_size": 65536
00:16:10.096 },
00:16:10.096 {
00:16:10.096 "name": "BaseBdev2",
00:16:10.096 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:10.096 "is_configured": false,
00:16:10.096 "data_offset": 0,
00:16:10.096 "data_size": 0
00:16:10.096 },
00:16:10.096 {
00:16:10.096 "name": "BaseBdev3",
00:16:10.096 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:10.096 "is_configured": false,
00:16:10.096 "data_offset": 0,
00:16:10.096 "data_size": 0
00:16:10.096 },
00:16:10.096 {
00:16:10.096 "name": "BaseBdev4",
00:16:10.096 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:10.096 "is_configured": false,
00:16:10.096 "data_offset": 0,
00:16:10.096 "data_size": 0
00:16:10.096 }
00:16:10.096 ]
00:16:10.096 }'
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:10.096 18:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:10.663 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:10.922 [2024-07-24 18:19:19.344098] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:10.922 BaseBdev2
00:16:10.922 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:16:10.922 18:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:16:10.922 18:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:10.922 18:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:10.922 18:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:10.922 18:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:10.922 18:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:11.182 [
00:16:11.182 {
00:16:11.182 "name": "BaseBdev2",
00:16:11.182 "aliases": [
00:16:11.182 "ca10e3ea-e085-41ba-a0b4-0860cd84051a"
00:16:11.182 ],
00:16:11.182 "product_name": "Malloc disk",
00:16:11.182 "block_size": 512,
00:16:11.182 "num_blocks": 65536,
00:16:11.182 "uuid": "ca10e3ea-e085-41ba-a0b4-0860cd84051a",
00:16:11.182 "assigned_rate_limits": {
00:16:11.182 "rw_ios_per_sec": 0,
00:16:11.182 "rw_mbytes_per_sec": 0,
00:16:11.182 "r_mbytes_per_sec": 0,
00:16:11.182 "w_mbytes_per_sec": 0
00:16:11.182 },
00:16:11.182 "claimed": true,
00:16:11.182 "claim_type": "exclusive_write",
00:16:11.182 "zoned": false,
00:16:11.182 "supported_io_types": {
00:16:11.182 "read": true,
00:16:11.182 "write": true,
00:16:11.182 "unmap": true,
00:16:11.182 "flush": true,
00:16:11.182 "reset": true,
00:16:11.182 "nvme_admin": false,
00:16:11.182 "nvme_io": false,
00:16:11.182 "nvme_io_md": false,
00:16:11.182 "write_zeroes": true,
00:16:11.182 "zcopy": true,
00:16:11.182 "get_zone_info": false,
00:16:11.182 "zone_management": false,
00:16:11.182 "zone_append": false,
00:16:11.182 "compare": false,
00:16:11.182 "compare_and_write": false,
00:16:11.182 "abort": true,
00:16:11.182 "seek_hole": false,
00:16:11.182 "seek_data": false,
00:16:11.182 "copy": true,
00:16:11.182 "nvme_iov_md": false
00:16:11.182 },
00:16:11.182 "memory_domains": [
00:16:11.182 {
00:16:11.182 "dma_device_id": "system",
00:16:11.182 "dma_device_type": 1
00:16:11.182 },
00:16:11.182 {
00:16:11.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:11.182 "dma_device_type": 2
00:16:11.182 }
00:16:11.182 ],
00:16:11.182 "driver_specific": {}
00:16:11.182 }
00:16:11.182 ]
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:11.182 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:11.442 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:11.442 "name": "Existed_Raid",
00:16:11.442 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:11.442 "strip_size_kb": 64,
00:16:11.442 "state": "configuring",
00:16:11.442 "raid_level": "concat",
00:16:11.442 "superblock": false,
00:16:11.442 "num_base_bdevs": 4,
00:16:11.442 "num_base_bdevs_discovered": 2,
00:16:11.442 "num_base_bdevs_operational": 4,
00:16:11.442 "base_bdevs_list": [
00:16:11.442 {
00:16:11.442 "name": "BaseBdev1",
00:16:11.442 "uuid": "0b69d5d5-835a-4dbc-96e3-d55d8ba1a096",
00:16:11.442 "is_configured": true,
00:16:11.442 "data_offset": 0,
00:16:11.442 "data_size": 65536
00:16:11.442 },
00:16:11.442 {
00:16:11.442 "name": "BaseBdev2",
00:16:11.442 "uuid": "ca10e3ea-e085-41ba-a0b4-0860cd84051a",
00:16:11.442 "is_configured": true,
00:16:11.442 "data_offset": 0,
00:16:11.442 "data_size": 65536
00:16:11.442 },
00:16:11.442 {
00:16:11.442 "name": "BaseBdev3",
00:16:11.442 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:11.442 "is_configured": false,
00:16:11.442 "data_offset": 0,
00:16:11.442 "data_size": 0
00:16:11.442 },
00:16:11.442 {
00:16:11.442 "name": "BaseBdev4",
00:16:11.442 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:11.442 "is_configured": false,
00:16:11.442 "data_offset": 0,
00:16:11.442 "data_size": 0
00:16:11.442 }
00:16:11.442 ]
00:16:11.442 }'
00:16:11.442 18:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:11.442 18:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:12.011 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:12.011 [2024-07-24 18:19:20.533975] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:12.011 BaseBdev3
00:16:12.011 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:16:12.011 18:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:16:12.011 18:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:12.011 18:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:12.011 18:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:12.011 18:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:12.011 18:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:12.270 18:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:12.545 [
00:16:12.545 {
00:16:12.545 "name": "BaseBdev3",
00:16:12.545 "aliases": [
00:16:12.545 "61c2440b-e1e7-4772-aecd-50b17e70f80b"
00:16:12.545 ],
00:16:12.545 "product_name": "Malloc disk",
00:16:12.545 "block_size": 512,
00:16:12.545 "num_blocks": 65536,
00:16:12.545 "uuid": "61c2440b-e1e7-4772-aecd-50b17e70f80b",
00:16:12.545 "assigned_rate_limits": {
00:16:12.545 "rw_ios_per_sec": 0,
00:16:12.545 "rw_mbytes_per_sec": 0,
00:16:12.545 "r_mbytes_per_sec": 0,
00:16:12.545 "w_mbytes_per_sec": 0
00:16:12.545 },
00:16:12.545 "claimed": true,
00:16:12.545 "claim_type": "exclusive_write",
00:16:12.545 "zoned": false,
00:16:12.545 "supported_io_types": {
00:16:12.545 "read": true,
00:16:12.545 "write": true,
00:16:12.545 "unmap": true,
00:16:12.545 "flush": true,
00:16:12.545 "reset": true,
00:16:12.545 "nvme_admin": false,
00:16:12.545 "nvme_io": false,
00:16:12.545 "nvme_io_md": false,
00:16:12.545 "write_zeroes": true,
00:16:12.545 "zcopy": true,
00:16:12.545 "get_zone_info": false,
00:16:12.545 "zone_management": false,
00:16:12.545 "zone_append": false,
00:16:12.545 "compare": false,
00:16:12.545 "compare_and_write": false,
00:16:12.545 "abort": true,
00:16:12.545 "seek_hole": false,
00:16:12.545 "seek_data": false,
00:16:12.545 "copy": true,
00:16:12.545 "nvme_iov_md": false
00:16:12.545 },
00:16:12.545 "memory_domains": [
00:16:12.545 {
00:16:12.545 "dma_device_id": "system",
00:16:12.545 "dma_device_type": 1
00:16:12.545 },
00:16:12.545 {
00:16:12.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:12.545 "dma_device_type": 2
00:16:12.545 }
00:16:12.545 ],
00:16:12.545 "driver_specific": {}
00:16:12.545 }
00:16:12.545 ]
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:12.545 18:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:12.545 18:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:12.545 "name": "Existed_Raid",
00:16:12.545 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:12.545 "strip_size_kb": 64,
00:16:12.545 "state": "configuring",
00:16:12.545 "raid_level": "concat",
00:16:12.545 "superblock": false,
00:16:12.545 "num_base_bdevs": 4,
00:16:12.545 "num_base_bdevs_discovered": 3,
00:16:12.545 "num_base_bdevs_operational": 4,
00:16:12.545 "base_bdevs_list": [
00:16:12.545 {
00:16:12.545 "name": "BaseBdev1",
00:16:12.545 "uuid": "0b69d5d5-835a-4dbc-96e3-d55d8ba1a096",
00:16:12.545 "is_configured": true,
00:16:12.545 "data_offset": 0,
00:16:12.545 "data_size": 65536
00:16:12.545 },
00:16:12.545 {
00:16:12.545 "name": "BaseBdev2",
00:16:12.545 "uuid": "ca10e3ea-e085-41ba-a0b4-0860cd84051a",
00:16:12.545 "is_configured": true,
00:16:12.545 "data_offset": 0,
00:16:12.545 "data_size": 65536
00:16:12.545 },
00:16:12.545 {
00:16:12.545 "name": "BaseBdev3",
00:16:12.545 "uuid": "61c2440b-e1e7-4772-aecd-50b17e70f80b",
00:16:12.545 "is_configured": true,
00:16:12.545 "data_offset": 0,
00:16:12.545 "data_size": 65536
00:16:12.545 },
00:16:12.545 {
00:16:12.545 "name": "BaseBdev4",
00:16:12.545 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:12.545 "is_configured": false,
00:16:12.545 "data_offset": 0,
00:16:12.545 "data_size": 0
00:16:12.545 }
00:16:12.545 ]
00:16:12.545 }'
00:16:12.545 18:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:12.545 18:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:13.135 18:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:16:13.135 [2024-07-24 18:19:21.723812] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:13.135 [2024-07-24 18:19:21.723839] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x20fdab0
00:16:13.135 [2024-07-24 18:19:21.723844] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:16:13.135 [2024-07-24 18:19:21.723974] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22b0cd0
00:16:13.135 [2024-07-24 18:19:21.724055] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20fdab0
00:16:13.135 [2024-07-24 18:19:21.724062] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x20fdab0
00:16:13.135 [2024-07-24 18:19:21.724177] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:13.135 BaseBdev4
00:16:13.395 18:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4
00:16:13.395 18:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:16:13.395 18:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:13.395 18:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:13.395 18:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:13.395 18:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:13.395 18:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.395 18:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:13.655 [ 00:16:13.655 { 00:16:13.655 "name": "BaseBdev4", 00:16:13.655 "aliases": [ 00:16:13.655 "0a5171b8-30df-4c18-8b5c-246059057086" 00:16:13.655 ], 00:16:13.655 "product_name": "Malloc disk", 00:16:13.655 "block_size": 512, 00:16:13.655 "num_blocks": 65536, 00:16:13.655 "uuid": "0a5171b8-30df-4c18-8b5c-246059057086", 00:16:13.655 "assigned_rate_limits": { 00:16:13.655 "rw_ios_per_sec": 0, 00:16:13.655 "rw_mbytes_per_sec": 0, 00:16:13.655 "r_mbytes_per_sec": 0, 00:16:13.655 "w_mbytes_per_sec": 0 00:16:13.655 }, 00:16:13.655 "claimed": true, 00:16:13.655 "claim_type": "exclusive_write", 00:16:13.655 "zoned": false, 00:16:13.655 "supported_io_types": { 00:16:13.655 "read": true, 00:16:13.655 "write": true, 00:16:13.655 "unmap": true, 00:16:13.655 "flush": true, 00:16:13.655 "reset": true, 00:16:13.655 "nvme_admin": false, 00:16:13.655 "nvme_io": false, 00:16:13.655 "nvme_io_md": false, 00:16:13.655 "write_zeroes": true, 00:16:13.655 "zcopy": true, 00:16:13.655 "get_zone_info": false, 00:16:13.655 "zone_management": false, 00:16:13.655 "zone_append": false, 00:16:13.655 "compare": false, 00:16:13.655 "compare_and_write": false, 00:16:13.655 "abort": true, 00:16:13.655 "seek_hole": false, 00:16:13.655 "seek_data": false, 00:16:13.655 "copy": true, 00:16:13.655 "nvme_iov_md": false 00:16:13.655 }, 00:16:13.655 "memory_domains": [ 00:16:13.655 { 00:16:13.655 "dma_device_id": "system", 00:16:13.655 "dma_device_type": 1 00:16:13.655 
}, 00:16:13.655 { 00:16:13.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.655 "dma_device_type": 2 00:16:13.655 } 00:16:13.655 ], 00:16:13.655 "driver_specific": {} 00:16:13.655 } 00:16:13.655 ] 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.655 "name": "Existed_Raid", 00:16:13.655 "uuid": "9c71d165-a592-4695-8bca-c77cbc433b94", 00:16:13.655 "strip_size_kb": 64, 00:16:13.655 "state": "online", 00:16:13.655 "raid_level": "concat", 00:16:13.655 "superblock": false, 00:16:13.655 "num_base_bdevs": 4, 00:16:13.655 "num_base_bdevs_discovered": 4, 00:16:13.655 "num_base_bdevs_operational": 4, 00:16:13.655 "base_bdevs_list": [ 00:16:13.655 { 00:16:13.655 "name": "BaseBdev1", 00:16:13.655 "uuid": "0b69d5d5-835a-4dbc-96e3-d55d8ba1a096", 00:16:13.655 "is_configured": true, 00:16:13.655 "data_offset": 0, 00:16:13.655 "data_size": 65536 00:16:13.655 }, 00:16:13.655 { 00:16:13.655 "name": "BaseBdev2", 00:16:13.655 "uuid": "ca10e3ea-e085-41ba-a0b4-0860cd84051a", 00:16:13.655 "is_configured": true, 00:16:13.655 "data_offset": 0, 00:16:13.655 "data_size": 65536 00:16:13.655 }, 00:16:13.655 { 00:16:13.655 "name": "BaseBdev3", 00:16:13.655 "uuid": "61c2440b-e1e7-4772-aecd-50b17e70f80b", 00:16:13.655 "is_configured": true, 00:16:13.655 "data_offset": 0, 00:16:13.655 "data_size": 65536 00:16:13.655 }, 00:16:13.655 { 00:16:13.655 "name": "BaseBdev4", 00:16:13.655 "uuid": "0a5171b8-30df-4c18-8b5c-246059057086", 00:16:13.655 "is_configured": true, 00:16:13.655 "data_offset": 0, 00:16:13.655 "data_size": 65536 00:16:13.655 } 00:16:13.655 ] 00:16:13.655 }' 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.655 18:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.224 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:14.224 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:14.224 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:16:14.224 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:14.224 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:14.224 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:14.224 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:14.224 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:14.483 [2024-07-24 18:19:22.870972] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.483 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:14.483 "name": "Existed_Raid", 00:16:14.483 "aliases": [ 00:16:14.483 "9c71d165-a592-4695-8bca-c77cbc433b94" 00:16:14.483 ], 00:16:14.483 "product_name": "Raid Volume", 00:16:14.483 "block_size": 512, 00:16:14.483 "num_blocks": 262144, 00:16:14.483 "uuid": "9c71d165-a592-4695-8bca-c77cbc433b94", 00:16:14.483 "assigned_rate_limits": { 00:16:14.483 "rw_ios_per_sec": 0, 00:16:14.483 "rw_mbytes_per_sec": 0, 00:16:14.483 "r_mbytes_per_sec": 0, 00:16:14.483 "w_mbytes_per_sec": 0 00:16:14.483 }, 00:16:14.483 "claimed": false, 00:16:14.483 "zoned": false, 00:16:14.483 "supported_io_types": { 00:16:14.483 "read": true, 00:16:14.483 "write": true, 00:16:14.483 "unmap": true, 00:16:14.483 "flush": true, 00:16:14.483 "reset": true, 00:16:14.483 "nvme_admin": false, 00:16:14.483 "nvme_io": false, 00:16:14.483 "nvme_io_md": false, 00:16:14.483 "write_zeroes": true, 00:16:14.483 "zcopy": false, 00:16:14.483 "get_zone_info": false, 00:16:14.483 "zone_management": false, 00:16:14.483 "zone_append": false, 00:16:14.483 "compare": false, 00:16:14.483 "compare_and_write": false, 00:16:14.483 "abort": false, 00:16:14.483 "seek_hole": false, 
00:16:14.483 "seek_data": false, 00:16:14.483 "copy": false, 00:16:14.483 "nvme_iov_md": false 00:16:14.483 }, 00:16:14.483 "memory_domains": [ 00:16:14.483 { 00:16:14.483 "dma_device_id": "system", 00:16:14.483 "dma_device_type": 1 00:16:14.483 }, 00:16:14.483 { 00:16:14.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.483 "dma_device_type": 2 00:16:14.483 }, 00:16:14.483 { 00:16:14.483 "dma_device_id": "system", 00:16:14.483 "dma_device_type": 1 00:16:14.483 }, 00:16:14.483 { 00:16:14.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.483 "dma_device_type": 2 00:16:14.483 }, 00:16:14.483 { 00:16:14.483 "dma_device_id": "system", 00:16:14.483 "dma_device_type": 1 00:16:14.483 }, 00:16:14.483 { 00:16:14.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.483 "dma_device_type": 2 00:16:14.483 }, 00:16:14.483 { 00:16:14.483 "dma_device_id": "system", 00:16:14.483 "dma_device_type": 1 00:16:14.483 }, 00:16:14.483 { 00:16:14.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.483 "dma_device_type": 2 00:16:14.483 } 00:16:14.483 ], 00:16:14.483 "driver_specific": { 00:16:14.483 "raid": { 00:16:14.483 "uuid": "9c71d165-a592-4695-8bca-c77cbc433b94", 00:16:14.483 "strip_size_kb": 64, 00:16:14.483 "state": "online", 00:16:14.483 "raid_level": "concat", 00:16:14.483 "superblock": false, 00:16:14.483 "num_base_bdevs": 4, 00:16:14.483 "num_base_bdevs_discovered": 4, 00:16:14.483 "num_base_bdevs_operational": 4, 00:16:14.483 "base_bdevs_list": [ 00:16:14.483 { 00:16:14.483 "name": "BaseBdev1", 00:16:14.483 "uuid": "0b69d5d5-835a-4dbc-96e3-d55d8ba1a096", 00:16:14.483 "is_configured": true, 00:16:14.483 "data_offset": 0, 00:16:14.483 "data_size": 65536 00:16:14.483 }, 00:16:14.483 { 00:16:14.483 "name": "BaseBdev2", 00:16:14.483 "uuid": "ca10e3ea-e085-41ba-a0b4-0860cd84051a", 00:16:14.483 "is_configured": true, 00:16:14.483 "data_offset": 0, 00:16:14.483 "data_size": 65536 00:16:14.483 }, 00:16:14.483 { 00:16:14.483 "name": "BaseBdev3", 00:16:14.483 "uuid": 
"61c2440b-e1e7-4772-aecd-50b17e70f80b", 00:16:14.483 "is_configured": true, 00:16:14.483 "data_offset": 0, 00:16:14.483 "data_size": 65536 00:16:14.483 }, 00:16:14.483 { 00:16:14.483 "name": "BaseBdev4", 00:16:14.483 "uuid": "0a5171b8-30df-4c18-8b5c-246059057086", 00:16:14.483 "is_configured": true, 00:16:14.483 "data_offset": 0, 00:16:14.483 "data_size": 65536 00:16:14.483 } 00:16:14.483 ] 00:16:14.483 } 00:16:14.483 } 00:16:14.483 }' 00:16:14.483 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.483 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:14.483 BaseBdev2 00:16:14.483 BaseBdev3 00:16:14.483 BaseBdev4' 00:16:14.483 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:14.483 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:14.483 18:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:14.743 "name": "BaseBdev1", 00:16:14.743 "aliases": [ 00:16:14.743 "0b69d5d5-835a-4dbc-96e3-d55d8ba1a096" 00:16:14.743 ], 00:16:14.743 "product_name": "Malloc disk", 00:16:14.743 "block_size": 512, 00:16:14.743 "num_blocks": 65536, 00:16:14.743 "uuid": "0b69d5d5-835a-4dbc-96e3-d55d8ba1a096", 00:16:14.743 "assigned_rate_limits": { 00:16:14.743 "rw_ios_per_sec": 0, 00:16:14.743 "rw_mbytes_per_sec": 0, 00:16:14.743 "r_mbytes_per_sec": 0, 00:16:14.743 "w_mbytes_per_sec": 0 00:16:14.743 }, 00:16:14.743 "claimed": true, 00:16:14.743 "claim_type": "exclusive_write", 00:16:14.743 "zoned": false, 00:16:14.743 "supported_io_types": { 00:16:14.743 "read": true, 00:16:14.743 
"write": true, 00:16:14.743 "unmap": true, 00:16:14.743 "flush": true, 00:16:14.743 "reset": true, 00:16:14.743 "nvme_admin": false, 00:16:14.743 "nvme_io": false, 00:16:14.743 "nvme_io_md": false, 00:16:14.743 "write_zeroes": true, 00:16:14.743 "zcopy": true, 00:16:14.743 "get_zone_info": false, 00:16:14.743 "zone_management": false, 00:16:14.743 "zone_append": false, 00:16:14.743 "compare": false, 00:16:14.743 "compare_and_write": false, 00:16:14.743 "abort": true, 00:16:14.743 "seek_hole": false, 00:16:14.743 "seek_data": false, 00:16:14.743 "copy": true, 00:16:14.743 "nvme_iov_md": false 00:16:14.743 }, 00:16:14.743 "memory_domains": [ 00:16:14.743 { 00:16:14.743 "dma_device_id": "system", 00:16:14.743 "dma_device_type": 1 00:16:14.743 }, 00:16:14.743 { 00:16:14.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.743 "dma_device_type": 2 00:16:14.743 } 00:16:14.743 ], 00:16:14.743 "driver_specific": {} 00:16:14.743 }' 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:14.743 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.003 18:19:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.003 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:15.003 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:15.003 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:15.003 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.003 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.003 "name": "BaseBdev2", 00:16:15.003 "aliases": [ 00:16:15.003 "ca10e3ea-e085-41ba-a0b4-0860cd84051a" 00:16:15.003 ], 00:16:15.003 "product_name": "Malloc disk", 00:16:15.003 "block_size": 512, 00:16:15.003 "num_blocks": 65536, 00:16:15.003 "uuid": "ca10e3ea-e085-41ba-a0b4-0860cd84051a", 00:16:15.003 "assigned_rate_limits": { 00:16:15.003 "rw_ios_per_sec": 0, 00:16:15.003 "rw_mbytes_per_sec": 0, 00:16:15.003 "r_mbytes_per_sec": 0, 00:16:15.003 "w_mbytes_per_sec": 0 00:16:15.003 }, 00:16:15.003 "claimed": true, 00:16:15.003 "claim_type": "exclusive_write", 00:16:15.003 "zoned": false, 00:16:15.003 "supported_io_types": { 00:16:15.003 "read": true, 00:16:15.003 "write": true, 00:16:15.003 "unmap": true, 00:16:15.003 "flush": true, 00:16:15.003 "reset": true, 00:16:15.003 "nvme_admin": false, 00:16:15.003 "nvme_io": false, 00:16:15.003 "nvme_io_md": false, 00:16:15.003 "write_zeroes": true, 00:16:15.003 "zcopy": true, 00:16:15.003 "get_zone_info": false, 00:16:15.003 "zone_management": false, 00:16:15.003 "zone_append": false, 00:16:15.003 "compare": false, 00:16:15.003 "compare_and_write": false, 00:16:15.003 "abort": true, 00:16:15.003 "seek_hole": false, 00:16:15.003 "seek_data": false, 00:16:15.003 "copy": true, 00:16:15.003 "nvme_iov_md": false 00:16:15.003 }, 
00:16:15.003 "memory_domains": [ 00:16:15.003 { 00:16:15.003 "dma_device_id": "system", 00:16:15.003 "dma_device_type": 1 00:16:15.003 }, 00:16:15.003 { 00:16:15.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.003 "dma_device_type": 2 00:16:15.003 } 00:16:15.003 ], 00:16:15.003 "driver_specific": {} 00:16:15.003 }' 00:16:15.003 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.262 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.262 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:15.262 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.262 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.262 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.262 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.262 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.262 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.262 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.262 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.521 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:15.521 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:15.521 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:15.521 18:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.521 18:19:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.521 "name": "BaseBdev3", 00:16:15.521 "aliases": [ 00:16:15.521 "61c2440b-e1e7-4772-aecd-50b17e70f80b" 00:16:15.521 ], 00:16:15.521 "product_name": "Malloc disk", 00:16:15.521 "block_size": 512, 00:16:15.521 "num_blocks": 65536, 00:16:15.521 "uuid": "61c2440b-e1e7-4772-aecd-50b17e70f80b", 00:16:15.521 "assigned_rate_limits": { 00:16:15.521 "rw_ios_per_sec": 0, 00:16:15.521 "rw_mbytes_per_sec": 0, 00:16:15.521 "r_mbytes_per_sec": 0, 00:16:15.521 "w_mbytes_per_sec": 0 00:16:15.521 }, 00:16:15.521 "claimed": true, 00:16:15.521 "claim_type": "exclusive_write", 00:16:15.521 "zoned": false, 00:16:15.521 "supported_io_types": { 00:16:15.521 "read": true, 00:16:15.521 "write": true, 00:16:15.521 "unmap": true, 00:16:15.521 "flush": true, 00:16:15.521 "reset": true, 00:16:15.521 "nvme_admin": false, 00:16:15.521 "nvme_io": false, 00:16:15.521 "nvme_io_md": false, 00:16:15.521 "write_zeroes": true, 00:16:15.521 "zcopy": true, 00:16:15.522 "get_zone_info": false, 00:16:15.522 "zone_management": false, 00:16:15.522 "zone_append": false, 00:16:15.522 "compare": false, 00:16:15.522 "compare_and_write": false, 00:16:15.522 "abort": true, 00:16:15.522 "seek_hole": false, 00:16:15.522 "seek_data": false, 00:16:15.522 "copy": true, 00:16:15.522 "nvme_iov_md": false 00:16:15.522 }, 00:16:15.522 "memory_domains": [ 00:16:15.522 { 00:16:15.522 "dma_device_id": "system", 00:16:15.522 "dma_device_type": 1 00:16:15.522 }, 00:16:15.522 { 00:16:15.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.522 "dma_device_type": 2 00:16:15.522 } 00:16:15.522 ], 00:16:15.522 "driver_specific": {} 00:16:15.522 }' 00:16:15.522 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.522 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.522 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:16:15.522 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:15.781 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:16.040 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:16.040 "name": "BaseBdev4", 00:16:16.040 "aliases": [ 00:16:16.040 "0a5171b8-30df-4c18-8b5c-246059057086" 00:16:16.040 ], 00:16:16.040 "product_name": "Malloc disk", 00:16:16.040 "block_size": 512, 00:16:16.040 "num_blocks": 65536, 00:16:16.040 "uuid": "0a5171b8-30df-4c18-8b5c-246059057086", 00:16:16.040 "assigned_rate_limits": { 00:16:16.040 "rw_ios_per_sec": 0, 00:16:16.040 "rw_mbytes_per_sec": 0, 00:16:16.040 "r_mbytes_per_sec": 0, 00:16:16.040 "w_mbytes_per_sec": 0 00:16:16.040 }, 00:16:16.041 "claimed": true, 00:16:16.041 
"claim_type": "exclusive_write", 00:16:16.041 "zoned": false, 00:16:16.041 "supported_io_types": { 00:16:16.041 "read": true, 00:16:16.041 "write": true, 00:16:16.041 "unmap": true, 00:16:16.041 "flush": true, 00:16:16.041 "reset": true, 00:16:16.041 "nvme_admin": false, 00:16:16.041 "nvme_io": false, 00:16:16.041 "nvme_io_md": false, 00:16:16.041 "write_zeroes": true, 00:16:16.041 "zcopy": true, 00:16:16.041 "get_zone_info": false, 00:16:16.041 "zone_management": false, 00:16:16.041 "zone_append": false, 00:16:16.041 "compare": false, 00:16:16.041 "compare_and_write": false, 00:16:16.041 "abort": true, 00:16:16.041 "seek_hole": false, 00:16:16.041 "seek_data": false, 00:16:16.041 "copy": true, 00:16:16.041 "nvme_iov_md": false 00:16:16.041 }, 00:16:16.041 "memory_domains": [ 00:16:16.041 { 00:16:16.041 "dma_device_id": "system", 00:16:16.041 "dma_device_type": 1 00:16:16.041 }, 00:16:16.041 { 00:16:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.041 "dma_device_type": 2 00:16:16.041 } 00:16:16.041 ], 00:16:16.041 "driver_specific": {} 00:16:16.041 }' 00:16:16.041 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.041 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.041 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:16.041 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.300 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.300 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:16.300 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.300 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.300 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:16:16.300 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.300 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.300 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:16.300 18:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:16.559 [2024-07-24 18:19:24.996284] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.559 [2024-07-24 18:19:24.996303] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.559 [2024-07-24 18:19:24.996342] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.559 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.819 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.819 "name": "Existed_Raid", 00:16:16.819 "uuid": "9c71d165-a592-4695-8bca-c77cbc433b94", 00:16:16.819 "strip_size_kb": 64, 00:16:16.819 "state": "offline", 00:16:16.819 "raid_level": "concat", 00:16:16.819 "superblock": false, 00:16:16.819 "num_base_bdevs": 4, 00:16:16.819 "num_base_bdevs_discovered": 3, 00:16:16.819 "num_base_bdevs_operational": 3, 00:16:16.819 "base_bdevs_list": [ 00:16:16.819 { 00:16:16.819 "name": null, 00:16:16.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.819 "is_configured": false, 00:16:16.819 "data_offset": 0, 00:16:16.819 "data_size": 65536 00:16:16.819 }, 00:16:16.819 { 00:16:16.819 "name": "BaseBdev2", 00:16:16.819 "uuid": "ca10e3ea-e085-41ba-a0b4-0860cd84051a", 00:16:16.819 "is_configured": true, 00:16:16.819 "data_offset": 0, 00:16:16.819 "data_size": 65536 00:16:16.819 }, 00:16:16.819 { 00:16:16.819 "name": "BaseBdev3", 00:16:16.819 "uuid": "61c2440b-e1e7-4772-aecd-50b17e70f80b", 00:16:16.819 "is_configured": true, 00:16:16.819 
"data_offset": 0, 00:16:16.819 "data_size": 65536 00:16:16.819 }, 00:16:16.819 { 00:16:16.819 "name": "BaseBdev4", 00:16:16.819 "uuid": "0a5171b8-30df-4c18-8b5c-246059057086", 00:16:16.819 "is_configured": true, 00:16:16.819 "data_offset": 0, 00:16:16.819 "data_size": 65536 00:16:16.819 } 00:16:16.819 ] 00:16:16.819 }' 00:16:16.819 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.819 18:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.079 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:17.079 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:17.079 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.079 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:17.338 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:17.338 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.338 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:17.597 [2024-07-24 18:19:25.971665] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:17.597 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:17.597 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:17.597 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:16:17.597 18:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:17.597 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:17.597 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.597 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:17.856 [2024-07-24 18:19:26.310101] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:17.856 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:17.856 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:17.856 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.856 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:18.116 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:18.116 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:18.116 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:18.116 [2024-07-24 18:19:26.636368] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:18.116 [2024-07-24 18:19:26.636399] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20fdab0 name Existed_Raid, state offline 00:16:18.116 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:18.116 18:19:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:18.116 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.116 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:18.376 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:18.376 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:18.376 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:18.376 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:18.376 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:18.376 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:18.635 BaseBdev2 00:16:18.635 18:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:18.635 18:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:18.635 18:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:18.635 18:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:18.635 18:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:18.635 18:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:18.635 18:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:18.635 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:18.895 [ 00:16:18.895 { 00:16:18.895 "name": "BaseBdev2", 00:16:18.895 "aliases": [ 00:16:18.895 "29804d17-5cea-47be-b9a2-84b6d78903f0" 00:16:18.895 ], 00:16:18.895 "product_name": "Malloc disk", 00:16:18.895 "block_size": 512, 00:16:18.895 "num_blocks": 65536, 00:16:18.895 "uuid": "29804d17-5cea-47be-b9a2-84b6d78903f0", 00:16:18.895 "assigned_rate_limits": { 00:16:18.895 "rw_ios_per_sec": 0, 00:16:18.895 "rw_mbytes_per_sec": 0, 00:16:18.895 "r_mbytes_per_sec": 0, 00:16:18.895 "w_mbytes_per_sec": 0 00:16:18.895 }, 00:16:18.895 "claimed": false, 00:16:18.895 "zoned": false, 00:16:18.895 "supported_io_types": { 00:16:18.895 "read": true, 00:16:18.895 "write": true, 00:16:18.895 "unmap": true, 00:16:18.895 "flush": true, 00:16:18.895 "reset": true, 00:16:18.895 "nvme_admin": false, 00:16:18.895 "nvme_io": false, 00:16:18.895 "nvme_io_md": false, 00:16:18.895 "write_zeroes": true, 00:16:18.895 "zcopy": true, 00:16:18.895 "get_zone_info": false, 00:16:18.895 "zone_management": false, 00:16:18.895 "zone_append": false, 00:16:18.895 "compare": false, 00:16:18.895 "compare_and_write": false, 00:16:18.895 "abort": true, 00:16:18.895 "seek_hole": false, 00:16:18.895 "seek_data": false, 00:16:18.895 "copy": true, 00:16:18.895 "nvme_iov_md": false 00:16:18.895 }, 00:16:18.895 "memory_domains": [ 00:16:18.895 { 00:16:18.895 "dma_device_id": "system", 00:16:18.895 "dma_device_type": 1 00:16:18.895 }, 00:16:18.895 { 00:16:18.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.895 "dma_device_type": 2 00:16:18.895 } 00:16:18.895 ], 00:16:18.895 "driver_specific": {} 00:16:18.895 } 00:16:18.895 ] 00:16:18.895 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:18.895 
18:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:18.895 18:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:18.895 18:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:18.895 BaseBdev3 00:16:18.895 18:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:18.895 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:18.895 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:18.895 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:18.895 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:18.895 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:18.895 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.155 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:19.414 [ 00:16:19.414 { 00:16:19.414 "name": "BaseBdev3", 00:16:19.414 "aliases": [ 00:16:19.414 "b3e06b06-2574-4853-9132-97cf4012a010" 00:16:19.414 ], 00:16:19.414 "product_name": "Malloc disk", 00:16:19.414 "block_size": 512, 00:16:19.414 "num_blocks": 65536, 00:16:19.414 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:19.414 "assigned_rate_limits": { 00:16:19.414 "rw_ios_per_sec": 0, 00:16:19.414 "rw_mbytes_per_sec": 0, 00:16:19.414 
"r_mbytes_per_sec": 0, 00:16:19.414 "w_mbytes_per_sec": 0 00:16:19.414 }, 00:16:19.414 "claimed": false, 00:16:19.414 "zoned": false, 00:16:19.414 "supported_io_types": { 00:16:19.414 "read": true, 00:16:19.414 "write": true, 00:16:19.414 "unmap": true, 00:16:19.414 "flush": true, 00:16:19.414 "reset": true, 00:16:19.414 "nvme_admin": false, 00:16:19.414 "nvme_io": false, 00:16:19.414 "nvme_io_md": false, 00:16:19.414 "write_zeroes": true, 00:16:19.414 "zcopy": true, 00:16:19.414 "get_zone_info": false, 00:16:19.414 "zone_management": false, 00:16:19.414 "zone_append": false, 00:16:19.414 "compare": false, 00:16:19.414 "compare_and_write": false, 00:16:19.414 "abort": true, 00:16:19.414 "seek_hole": false, 00:16:19.414 "seek_data": false, 00:16:19.414 "copy": true, 00:16:19.414 "nvme_iov_md": false 00:16:19.414 }, 00:16:19.414 "memory_domains": [ 00:16:19.414 { 00:16:19.414 "dma_device_id": "system", 00:16:19.414 "dma_device_type": 1 00:16:19.414 }, 00:16:19.414 { 00:16:19.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.414 "dma_device_type": 2 00:16:19.414 } 00:16:19.414 ], 00:16:19.414 "driver_specific": {} 00:16:19.414 } 00:16:19.414 ] 00:16:19.414 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:19.414 18:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:19.414 18:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:19.414 18:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:19.414 BaseBdev4 00:16:19.414 18:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:19.414 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:19.414 18:19:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:19.414 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:19.414 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:19.414 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:19.414 18:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.674 18:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:19.933 [ 00:16:19.933 { 00:16:19.933 "name": "BaseBdev4", 00:16:19.933 "aliases": [ 00:16:19.933 "b73f3e16-3885-4990-8c68-ac52fac49bdc" 00:16:19.933 ], 00:16:19.933 "product_name": "Malloc disk", 00:16:19.933 "block_size": 512, 00:16:19.933 "num_blocks": 65536, 00:16:19.933 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:19.933 "assigned_rate_limits": { 00:16:19.933 "rw_ios_per_sec": 0, 00:16:19.933 "rw_mbytes_per_sec": 0, 00:16:19.933 "r_mbytes_per_sec": 0, 00:16:19.933 "w_mbytes_per_sec": 0 00:16:19.933 }, 00:16:19.933 "claimed": false, 00:16:19.933 "zoned": false, 00:16:19.933 "supported_io_types": { 00:16:19.933 "read": true, 00:16:19.933 "write": true, 00:16:19.933 "unmap": true, 00:16:19.933 "flush": true, 00:16:19.933 "reset": true, 00:16:19.933 "nvme_admin": false, 00:16:19.933 "nvme_io": false, 00:16:19.933 "nvme_io_md": false, 00:16:19.933 "write_zeroes": true, 00:16:19.933 "zcopy": true, 00:16:19.933 "get_zone_info": false, 00:16:19.933 "zone_management": false, 00:16:19.933 "zone_append": false, 00:16:19.933 "compare": false, 00:16:19.933 "compare_and_write": false, 00:16:19.933 "abort": true, 00:16:19.933 
"seek_hole": false, 00:16:19.933 "seek_data": false, 00:16:19.933 "copy": true, 00:16:19.933 "nvme_iov_md": false 00:16:19.933 }, 00:16:19.933 "memory_domains": [ 00:16:19.933 { 00:16:19.933 "dma_device_id": "system", 00:16:19.933 "dma_device_type": 1 00:16:19.933 }, 00:16:19.933 { 00:16:19.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.933 "dma_device_type": 2 00:16:19.933 } 00:16:19.933 ], 00:16:19.933 "driver_specific": {} 00:16:19.933 } 00:16:19.933 ] 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:19.933 [2024-07-24 18:19:28.450120] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.933 [2024-07-24 18:19:28.450149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.933 [2024-07-24 18:19:28.450162] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.933 [2024-07-24 18:19:28.451102] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.933 [2024-07-24 18:19:28.451131] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.933 18:19:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.933 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.193 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.193 "name": "Existed_Raid", 00:16:20.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.193 "strip_size_kb": 64, 00:16:20.193 "state": "configuring", 00:16:20.193 "raid_level": "concat", 00:16:20.193 "superblock": false, 00:16:20.193 "num_base_bdevs": 4, 00:16:20.193 "num_base_bdevs_discovered": 3, 00:16:20.193 "num_base_bdevs_operational": 4, 00:16:20.193 "base_bdevs_list": [ 00:16:20.193 { 00:16:20.193 "name": "BaseBdev1", 00:16:20.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.193 "is_configured": false, 00:16:20.193 "data_offset": 0, 00:16:20.193 "data_size": 0 00:16:20.193 }, 00:16:20.193 { 00:16:20.193 "name": "BaseBdev2", 00:16:20.193 "uuid": 
"29804d17-5cea-47be-b9a2-84b6d78903f0", 00:16:20.193 "is_configured": true, 00:16:20.193 "data_offset": 0, 00:16:20.193 "data_size": 65536 00:16:20.193 }, 00:16:20.193 { 00:16:20.193 "name": "BaseBdev3", 00:16:20.193 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:20.193 "is_configured": true, 00:16:20.193 "data_offset": 0, 00:16:20.193 "data_size": 65536 00:16:20.193 }, 00:16:20.193 { 00:16:20.193 "name": "BaseBdev4", 00:16:20.193 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:20.193 "is_configured": true, 00:16:20.193 "data_offset": 0, 00:16:20.193 "data_size": 65536 00:16:20.193 } 00:16:20.193 ] 00:16:20.193 }' 00:16:20.193 18:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.193 18:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:20.788 [2024-07-24 18:19:29.280269] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.788 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.789 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.789 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.047 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.047 "name": "Existed_Raid", 00:16:21.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.047 "strip_size_kb": 64, 00:16:21.047 "state": "configuring", 00:16:21.047 "raid_level": "concat", 00:16:21.047 "superblock": false, 00:16:21.047 "num_base_bdevs": 4, 00:16:21.047 "num_base_bdevs_discovered": 2, 00:16:21.047 "num_base_bdevs_operational": 4, 00:16:21.047 "base_bdevs_list": [ 00:16:21.047 { 00:16:21.047 "name": "BaseBdev1", 00:16:21.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.047 "is_configured": false, 00:16:21.047 "data_offset": 0, 00:16:21.047 "data_size": 0 00:16:21.047 }, 00:16:21.047 { 00:16:21.047 "name": null, 00:16:21.047 "uuid": "29804d17-5cea-47be-b9a2-84b6d78903f0", 00:16:21.047 "is_configured": false, 00:16:21.047 "data_offset": 0, 00:16:21.047 "data_size": 65536 00:16:21.047 }, 00:16:21.047 { 00:16:21.047 "name": "BaseBdev3", 00:16:21.047 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:21.047 "is_configured": true, 00:16:21.047 "data_offset": 0, 00:16:21.047 "data_size": 65536 00:16:21.047 }, 00:16:21.047 { 00:16:21.047 "name": "BaseBdev4", 00:16:21.047 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:21.047 "is_configured": true, 00:16:21.047 
"data_offset": 0, 00:16:21.047 "data_size": 65536 00:16:21.047 } 00:16:21.047 ] 00:16:21.047 }' 00:16:21.047 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.047 18:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.615 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.615 18:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:21.615 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:21.615 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:21.874 [2024-07-24 18:19:30.257591] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.874 BaseBdev1 00:16:21.874 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:21.874 18:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:21.874 18:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:21.874 18:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:21.874 18:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:21.874 18:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:21.874 18:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.874 
18:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.133 [ 00:16:22.133 { 00:16:22.133 "name": "BaseBdev1", 00:16:22.133 "aliases": [ 00:16:22.133 "1d608663-bdbb-4aa7-9dc4-c761472f816e" 00:16:22.133 ], 00:16:22.133 "product_name": "Malloc disk", 00:16:22.133 "block_size": 512, 00:16:22.133 "num_blocks": 65536, 00:16:22.133 "uuid": "1d608663-bdbb-4aa7-9dc4-c761472f816e", 00:16:22.133 "assigned_rate_limits": { 00:16:22.133 "rw_ios_per_sec": 0, 00:16:22.133 "rw_mbytes_per_sec": 0, 00:16:22.133 "r_mbytes_per_sec": 0, 00:16:22.133 "w_mbytes_per_sec": 0 00:16:22.133 }, 00:16:22.133 "claimed": true, 00:16:22.133 "claim_type": "exclusive_write", 00:16:22.134 "zoned": false, 00:16:22.134 "supported_io_types": { 00:16:22.134 "read": true, 00:16:22.134 "write": true, 00:16:22.134 "unmap": true, 00:16:22.134 "flush": true, 00:16:22.134 "reset": true, 00:16:22.134 "nvme_admin": false, 00:16:22.134 "nvme_io": false, 00:16:22.134 "nvme_io_md": false, 00:16:22.134 "write_zeroes": true, 00:16:22.134 "zcopy": true, 00:16:22.134 "get_zone_info": false, 00:16:22.134 "zone_management": false, 00:16:22.134 "zone_append": false, 00:16:22.134 "compare": false, 00:16:22.134 "compare_and_write": false, 00:16:22.134 "abort": true, 00:16:22.134 "seek_hole": false, 00:16:22.134 "seek_data": false, 00:16:22.134 "copy": true, 00:16:22.134 "nvme_iov_md": false 00:16:22.134 }, 00:16:22.134 "memory_domains": [ 00:16:22.134 { 00:16:22.134 "dma_device_id": "system", 00:16:22.134 "dma_device_type": 1 00:16:22.134 }, 00:16:22.134 { 00:16:22.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.134 "dma_device_type": 2 00:16:22.134 } 00:16:22.134 ], 00:16:22.134 "driver_specific": {} 00:16:22.134 } 00:16:22.134 ] 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:22.134 18:19:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.134 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.394 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.394 "name": "Existed_Raid", 00:16:22.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.394 "strip_size_kb": 64, 00:16:22.394 "state": "configuring", 00:16:22.394 "raid_level": "concat", 00:16:22.394 "superblock": false, 00:16:22.394 "num_base_bdevs": 4, 00:16:22.394 "num_base_bdevs_discovered": 3, 00:16:22.394 "num_base_bdevs_operational": 4, 00:16:22.394 "base_bdevs_list": [ 00:16:22.394 { 
00:16:22.394 "name": "BaseBdev1", 00:16:22.394 "uuid": "1d608663-bdbb-4aa7-9dc4-c761472f816e", 00:16:22.394 "is_configured": true, 00:16:22.394 "data_offset": 0, 00:16:22.394 "data_size": 65536 00:16:22.394 }, 00:16:22.394 { 00:16:22.394 "name": null, 00:16:22.394 "uuid": "29804d17-5cea-47be-b9a2-84b6d78903f0", 00:16:22.394 "is_configured": false, 00:16:22.394 "data_offset": 0, 00:16:22.394 "data_size": 65536 00:16:22.394 }, 00:16:22.394 { 00:16:22.394 "name": "BaseBdev3", 00:16:22.394 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:22.394 "is_configured": true, 00:16:22.394 "data_offset": 0, 00:16:22.394 "data_size": 65536 00:16:22.394 }, 00:16:22.394 { 00:16:22.394 "name": "BaseBdev4", 00:16:22.394 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:22.394 "is_configured": true, 00:16:22.394 "data_offset": 0, 00:16:22.394 "data_size": 65536 00:16:22.394 } 00:16:22.394 ] 00:16:22.394 }' 00:16:22.394 18:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.394 18:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.653 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.653 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:22.912 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:22.912 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:23.171 [2024-07-24 18:19:31.564977] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.171 "name": "Existed_Raid", 00:16:23.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.171 "strip_size_kb": 64, 00:16:23.171 "state": "configuring", 00:16:23.171 "raid_level": "concat", 00:16:23.171 "superblock": false, 00:16:23.171 "num_base_bdevs": 4, 00:16:23.171 "num_base_bdevs_discovered": 2, 00:16:23.171 "num_base_bdevs_operational": 4, 00:16:23.171 "base_bdevs_list": [ 00:16:23.171 { 00:16:23.171 "name": "BaseBdev1", 00:16:23.171 "uuid": "1d608663-bdbb-4aa7-9dc4-c761472f816e", 00:16:23.171 
"is_configured": true, 00:16:23.171 "data_offset": 0, 00:16:23.171 "data_size": 65536 00:16:23.171 }, 00:16:23.171 { 00:16:23.171 "name": null, 00:16:23.171 "uuid": "29804d17-5cea-47be-b9a2-84b6d78903f0", 00:16:23.171 "is_configured": false, 00:16:23.171 "data_offset": 0, 00:16:23.171 "data_size": 65536 00:16:23.171 }, 00:16:23.171 { 00:16:23.171 "name": null, 00:16:23.171 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:23.171 "is_configured": false, 00:16:23.171 "data_offset": 0, 00:16:23.171 "data_size": 65536 00:16:23.171 }, 00:16:23.171 { 00:16:23.171 "name": "BaseBdev4", 00:16:23.171 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:23.171 "is_configured": true, 00:16:23.171 "data_offset": 0, 00:16:23.171 "data_size": 65536 00:16:23.171 } 00:16:23.171 ] 00:16:23.171 }' 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.171 18:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.740 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.740 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:23.999 [2024-07-24 18:19:32.575597] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.999 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.259 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.259 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.259 "name": "Existed_Raid", 00:16:24.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.259 "strip_size_kb": 64, 00:16:24.259 "state": "configuring", 00:16:24.259 "raid_level": "concat", 00:16:24.259 "superblock": false, 00:16:24.259 "num_base_bdevs": 4, 00:16:24.259 "num_base_bdevs_discovered": 3, 00:16:24.259 "num_base_bdevs_operational": 4, 00:16:24.259 "base_bdevs_list": [ 00:16:24.259 { 00:16:24.259 "name": "BaseBdev1", 00:16:24.259 "uuid": "1d608663-bdbb-4aa7-9dc4-c761472f816e", 00:16:24.259 "is_configured": true, 00:16:24.259 "data_offset": 0, 00:16:24.259 "data_size": 65536 
00:16:24.259 }, 00:16:24.259 { 00:16:24.259 "name": null, 00:16:24.259 "uuid": "29804d17-5cea-47be-b9a2-84b6d78903f0", 00:16:24.259 "is_configured": false, 00:16:24.259 "data_offset": 0, 00:16:24.259 "data_size": 65536 00:16:24.259 }, 00:16:24.259 { 00:16:24.259 "name": "BaseBdev3", 00:16:24.259 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:24.259 "is_configured": true, 00:16:24.259 "data_offset": 0, 00:16:24.259 "data_size": 65536 00:16:24.259 }, 00:16:24.259 { 00:16:24.259 "name": "BaseBdev4", 00:16:24.259 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:24.259 "is_configured": true, 00:16:24.259 "data_offset": 0, 00:16:24.259 "data_size": 65536 00:16:24.259 } 00:16:24.259 ] 00:16:24.259 }' 00:16:24.259 18:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.259 18:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.826 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:24.826 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.826 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:24.826 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:25.084 [2024-07-24 18:19:33.518051] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.084 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.343 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.343 "name": "Existed_Raid", 00:16:25.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.343 "strip_size_kb": 64, 00:16:25.343 "state": "configuring", 00:16:25.343 "raid_level": "concat", 00:16:25.343 "superblock": false, 00:16:25.343 "num_base_bdevs": 4, 00:16:25.343 "num_base_bdevs_discovered": 2, 00:16:25.343 "num_base_bdevs_operational": 4, 00:16:25.343 "base_bdevs_list": [ 00:16:25.343 { 00:16:25.343 "name": null, 00:16:25.343 "uuid": "1d608663-bdbb-4aa7-9dc4-c761472f816e", 00:16:25.343 "is_configured": false, 00:16:25.343 "data_offset": 0, 00:16:25.343 "data_size": 65536 00:16:25.343 }, 00:16:25.343 { 00:16:25.343 "name": null, 00:16:25.343 "uuid": "29804d17-5cea-47be-b9a2-84b6d78903f0", 
00:16:25.343 "is_configured": false, 00:16:25.343 "data_offset": 0, 00:16:25.343 "data_size": 65536 00:16:25.343 }, 00:16:25.343 { 00:16:25.343 "name": "BaseBdev3", 00:16:25.343 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:25.343 "is_configured": true, 00:16:25.343 "data_offset": 0, 00:16:25.343 "data_size": 65536 00:16:25.343 }, 00:16:25.343 { 00:16:25.343 "name": "BaseBdev4", 00:16:25.343 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:25.343 "is_configured": true, 00:16:25.343 "data_offset": 0, 00:16:25.343 "data_size": 65536 00:16:25.343 } 00:16:25.343 ] 00:16:25.343 }' 00:16:25.343 18:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.343 18:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.600 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.600 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:25.858 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:25.858 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:26.117 [2024-07-24 18:19:34.506128] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:26.117 
18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.117 "name": "Existed_Raid", 00:16:26.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.117 "strip_size_kb": 64, 00:16:26.117 "state": "configuring", 00:16:26.117 "raid_level": "concat", 00:16:26.117 "superblock": false, 00:16:26.117 "num_base_bdevs": 4, 00:16:26.117 "num_base_bdevs_discovered": 3, 00:16:26.117 "num_base_bdevs_operational": 4, 00:16:26.117 "base_bdevs_list": [ 00:16:26.117 { 00:16:26.117 "name": null, 00:16:26.117 "uuid": "1d608663-bdbb-4aa7-9dc4-c761472f816e", 00:16:26.117 "is_configured": false, 00:16:26.117 "data_offset": 0, 00:16:26.117 "data_size": 65536 00:16:26.117 }, 00:16:26.117 { 00:16:26.117 "name": "BaseBdev2", 00:16:26.117 "uuid": "29804d17-5cea-47be-b9a2-84b6d78903f0", 00:16:26.117 "is_configured": true, 00:16:26.117 "data_offset": 0, 
00:16:26.117 "data_size": 65536 00:16:26.117 }, 00:16:26.117 { 00:16:26.117 "name": "BaseBdev3", 00:16:26.117 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:26.117 "is_configured": true, 00:16:26.117 "data_offset": 0, 00:16:26.117 "data_size": 65536 00:16:26.117 }, 00:16:26.117 { 00:16:26.117 "name": "BaseBdev4", 00:16:26.117 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:26.117 "is_configured": true, 00:16:26.117 "data_offset": 0, 00:16:26.117 "data_size": 65536 00:16:26.117 } 00:16:26.117 ] 00:16:26.117 }' 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.117 18:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.685 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.685 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:26.945 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:26.945 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:26.945 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.945 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1d608663-bdbb-4aa7-9dc4-c761472f816e 00:16:27.204 [2024-07-24 18:19:35.639856] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:27.204 [2024-07-24 18:19:35.639883] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x20f8dc0 00:16:27.204 [2024-07-24 18:19:35.639888] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:27.204 [2024-07-24 18:19:35.640014] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20fda40 00:16:27.204 [2024-07-24 18:19:35.640088] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20f8dc0 00:16:27.204 [2024-07-24 18:19:35.640094] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x20f8dc0 00:16:27.204 [2024-07-24 18:19:35.640224] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.204 NewBaseBdev 00:16:27.204 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:27.204 18:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:27.204 18:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.204 18:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:27.204 18:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.204 18:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.204 18:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.464 18:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:27.464 [ 00:16:27.464 { 00:16:27.464 "name": "NewBaseBdev", 00:16:27.464 "aliases": [ 00:16:27.464 "1d608663-bdbb-4aa7-9dc4-c761472f816e" 00:16:27.464 ], 00:16:27.464 "product_name": "Malloc disk", 00:16:27.464 
"block_size": 512, 00:16:27.464 "num_blocks": 65536, 00:16:27.464 "uuid": "1d608663-bdbb-4aa7-9dc4-c761472f816e", 00:16:27.464 "assigned_rate_limits": { 00:16:27.464 "rw_ios_per_sec": 0, 00:16:27.464 "rw_mbytes_per_sec": 0, 00:16:27.464 "r_mbytes_per_sec": 0, 00:16:27.464 "w_mbytes_per_sec": 0 00:16:27.464 }, 00:16:27.464 "claimed": true, 00:16:27.464 "claim_type": "exclusive_write", 00:16:27.464 "zoned": false, 00:16:27.464 "supported_io_types": { 00:16:27.464 "read": true, 00:16:27.464 "write": true, 00:16:27.464 "unmap": true, 00:16:27.464 "flush": true, 00:16:27.464 "reset": true, 00:16:27.464 "nvme_admin": false, 00:16:27.464 "nvme_io": false, 00:16:27.464 "nvme_io_md": false, 00:16:27.464 "write_zeroes": true, 00:16:27.464 "zcopy": true, 00:16:27.464 "get_zone_info": false, 00:16:27.464 "zone_management": false, 00:16:27.464 "zone_append": false, 00:16:27.464 "compare": false, 00:16:27.464 "compare_and_write": false, 00:16:27.464 "abort": true, 00:16:27.464 "seek_hole": false, 00:16:27.464 "seek_data": false, 00:16:27.464 "copy": true, 00:16:27.464 "nvme_iov_md": false 00:16:27.464 }, 00:16:27.464 "memory_domains": [ 00:16:27.464 { 00:16:27.464 "dma_device_id": "system", 00:16:27.464 "dma_device_type": 1 00:16:27.464 }, 00:16:27.464 { 00:16:27.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.464 "dma_device_type": 2 00:16:27.464 } 00:16:27.464 ], 00:16:27.464 "driver_specific": {} 00:16:27.464 } 00:16:27.464 ] 00:16:27.464 18:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:27.464 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:27.464 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.464 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:27.464 18:19:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:27.464 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.464 18:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:27.464 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.464 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.464 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.464 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.464 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.464 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.723 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.723 "name": "Existed_Raid", 00:16:27.723 "uuid": "017a63bf-c8d6-4ba3-96e2-256c0294c146", 00:16:27.723 "strip_size_kb": 64, 00:16:27.723 "state": "online", 00:16:27.723 "raid_level": "concat", 00:16:27.723 "superblock": false, 00:16:27.723 "num_base_bdevs": 4, 00:16:27.723 "num_base_bdevs_discovered": 4, 00:16:27.723 "num_base_bdevs_operational": 4, 00:16:27.723 "base_bdevs_list": [ 00:16:27.723 { 00:16:27.723 "name": "NewBaseBdev", 00:16:27.723 "uuid": "1d608663-bdbb-4aa7-9dc4-c761472f816e", 00:16:27.723 "is_configured": true, 00:16:27.723 "data_offset": 0, 00:16:27.723 "data_size": 65536 00:16:27.723 }, 00:16:27.723 { 00:16:27.723 "name": "BaseBdev2", 00:16:27.723 "uuid": "29804d17-5cea-47be-b9a2-84b6d78903f0", 00:16:27.723 "is_configured": true, 00:16:27.723 "data_offset": 0, 00:16:27.723 "data_size": 65536 00:16:27.723 }, 
00:16:27.723 { 00:16:27.724 "name": "BaseBdev3", 00:16:27.724 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:27.724 "is_configured": true, 00:16:27.724 "data_offset": 0, 00:16:27.724 "data_size": 65536 00:16:27.724 }, 00:16:27.724 { 00:16:27.724 "name": "BaseBdev4", 00:16:27.724 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:27.724 "is_configured": true, 00:16:27.724 "data_offset": 0, 00:16:27.724 "data_size": 65536 00:16:27.724 } 00:16:27.724 ] 00:16:27.724 }' 00:16:27.724 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.724 18:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.041 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:28.041 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:28.041 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:28.041 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:28.041 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:28.041 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:28.041 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:28.041 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:28.314 [2024-07-24 18:19:36.750930] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.314 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:28.314 "name": "Existed_Raid", 00:16:28.314 "aliases": [ 00:16:28.314 "017a63bf-c8d6-4ba3-96e2-256c0294c146" 
00:16:28.314 ], 00:16:28.314 "product_name": "Raid Volume", 00:16:28.314 "block_size": 512, 00:16:28.314 "num_blocks": 262144, 00:16:28.314 "uuid": "017a63bf-c8d6-4ba3-96e2-256c0294c146", 00:16:28.314 "assigned_rate_limits": { 00:16:28.314 "rw_ios_per_sec": 0, 00:16:28.314 "rw_mbytes_per_sec": 0, 00:16:28.314 "r_mbytes_per_sec": 0, 00:16:28.314 "w_mbytes_per_sec": 0 00:16:28.314 }, 00:16:28.314 "claimed": false, 00:16:28.314 "zoned": false, 00:16:28.314 "supported_io_types": { 00:16:28.314 "read": true, 00:16:28.314 "write": true, 00:16:28.314 "unmap": true, 00:16:28.314 "flush": true, 00:16:28.314 "reset": true, 00:16:28.314 "nvme_admin": false, 00:16:28.314 "nvme_io": false, 00:16:28.314 "nvme_io_md": false, 00:16:28.314 "write_zeroes": true, 00:16:28.314 "zcopy": false, 00:16:28.314 "get_zone_info": false, 00:16:28.314 "zone_management": false, 00:16:28.314 "zone_append": false, 00:16:28.314 "compare": false, 00:16:28.314 "compare_and_write": false, 00:16:28.314 "abort": false, 00:16:28.314 "seek_hole": false, 00:16:28.314 "seek_data": false, 00:16:28.314 "copy": false, 00:16:28.314 "nvme_iov_md": false 00:16:28.314 }, 00:16:28.314 "memory_domains": [ 00:16:28.314 { 00:16:28.314 "dma_device_id": "system", 00:16:28.314 "dma_device_type": 1 00:16:28.314 }, 00:16:28.314 { 00:16:28.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.314 "dma_device_type": 2 00:16:28.314 }, 00:16:28.314 { 00:16:28.314 "dma_device_id": "system", 00:16:28.314 "dma_device_type": 1 00:16:28.314 }, 00:16:28.314 { 00:16:28.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.314 "dma_device_type": 2 00:16:28.314 }, 00:16:28.314 { 00:16:28.314 "dma_device_id": "system", 00:16:28.314 "dma_device_type": 1 00:16:28.314 }, 00:16:28.314 { 00:16:28.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.314 "dma_device_type": 2 00:16:28.314 }, 00:16:28.314 { 00:16:28.314 "dma_device_id": "system", 00:16:28.314 "dma_device_type": 1 00:16:28.314 }, 00:16:28.314 { 00:16:28.314 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:28.314 "dma_device_type": 2 00:16:28.314 } 00:16:28.314 ], 00:16:28.314 "driver_specific": { 00:16:28.314 "raid": { 00:16:28.314 "uuid": "017a63bf-c8d6-4ba3-96e2-256c0294c146", 00:16:28.314 "strip_size_kb": 64, 00:16:28.314 "state": "online", 00:16:28.314 "raid_level": "concat", 00:16:28.314 "superblock": false, 00:16:28.314 "num_base_bdevs": 4, 00:16:28.314 "num_base_bdevs_discovered": 4, 00:16:28.314 "num_base_bdevs_operational": 4, 00:16:28.314 "base_bdevs_list": [ 00:16:28.314 { 00:16:28.314 "name": "NewBaseBdev", 00:16:28.314 "uuid": "1d608663-bdbb-4aa7-9dc4-c761472f816e", 00:16:28.314 "is_configured": true, 00:16:28.314 "data_offset": 0, 00:16:28.314 "data_size": 65536 00:16:28.314 }, 00:16:28.314 { 00:16:28.314 "name": "BaseBdev2", 00:16:28.314 "uuid": "29804d17-5cea-47be-b9a2-84b6d78903f0", 00:16:28.314 "is_configured": true, 00:16:28.314 "data_offset": 0, 00:16:28.314 "data_size": 65536 00:16:28.314 }, 00:16:28.314 { 00:16:28.314 "name": "BaseBdev3", 00:16:28.314 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:28.314 "is_configured": true, 00:16:28.314 "data_offset": 0, 00:16:28.314 "data_size": 65536 00:16:28.314 }, 00:16:28.314 { 00:16:28.314 "name": "BaseBdev4", 00:16:28.314 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:28.314 "is_configured": true, 00:16:28.314 "data_offset": 0, 00:16:28.314 "data_size": 65536 00:16:28.314 } 00:16:28.314 ] 00:16:28.314 } 00:16:28.314 } 00:16:28.314 }' 00:16:28.314 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.314 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:28.314 BaseBdev2 00:16:28.314 BaseBdev3 00:16:28.314 BaseBdev4' 00:16:28.314 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.314 18:19:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:28.314 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.574 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:28.574 "name": "NewBaseBdev", 00:16:28.574 "aliases": [ 00:16:28.574 "1d608663-bdbb-4aa7-9dc4-c761472f816e" 00:16:28.574 ], 00:16:28.574 "product_name": "Malloc disk", 00:16:28.574 "block_size": 512, 00:16:28.574 "num_blocks": 65536, 00:16:28.574 "uuid": "1d608663-bdbb-4aa7-9dc4-c761472f816e", 00:16:28.574 "assigned_rate_limits": { 00:16:28.574 "rw_ios_per_sec": 0, 00:16:28.574 "rw_mbytes_per_sec": 0, 00:16:28.574 "r_mbytes_per_sec": 0, 00:16:28.574 "w_mbytes_per_sec": 0 00:16:28.574 }, 00:16:28.574 "claimed": true, 00:16:28.574 "claim_type": "exclusive_write", 00:16:28.574 "zoned": false, 00:16:28.574 "supported_io_types": { 00:16:28.574 "read": true, 00:16:28.574 "write": true, 00:16:28.574 "unmap": true, 00:16:28.574 "flush": true, 00:16:28.574 "reset": true, 00:16:28.574 "nvme_admin": false, 00:16:28.574 "nvme_io": false, 00:16:28.574 "nvme_io_md": false, 00:16:28.574 "write_zeroes": true, 00:16:28.574 "zcopy": true, 00:16:28.574 "get_zone_info": false, 00:16:28.574 "zone_management": false, 00:16:28.574 "zone_append": false, 00:16:28.574 "compare": false, 00:16:28.574 "compare_and_write": false, 00:16:28.574 "abort": true, 00:16:28.574 "seek_hole": false, 00:16:28.574 "seek_data": false, 00:16:28.574 "copy": true, 00:16:28.574 "nvme_iov_md": false 00:16:28.574 }, 00:16:28.574 "memory_domains": [ 00:16:28.574 { 00:16:28.574 "dma_device_id": "system", 00:16:28.574 "dma_device_type": 1 00:16:28.574 }, 00:16:28.574 { 00:16:28.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.574 "dma_device_type": 2 00:16:28.574 } 00:16:28.574 ], 00:16:28.574 "driver_specific": {} 00:16:28.574 }' 00:16:28.574 18:19:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.574 18:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.574 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:28.574 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.574 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.574 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:28.574 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.574 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.834 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:28.834 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.834 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.834 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:28.834 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.834 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:28.834 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.834 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:28.834 "name": "BaseBdev2", 00:16:28.834 "aliases": [ 00:16:28.834 "29804d17-5cea-47be-b9a2-84b6d78903f0" 00:16:28.834 ], 00:16:28.834 "product_name": "Malloc disk", 00:16:28.834 "block_size": 512, 00:16:28.834 "num_blocks": 65536, 00:16:28.834 "uuid": 
"29804d17-5cea-47be-b9a2-84b6d78903f0", 00:16:28.834 "assigned_rate_limits": { 00:16:28.834 "rw_ios_per_sec": 0, 00:16:28.834 "rw_mbytes_per_sec": 0, 00:16:28.834 "r_mbytes_per_sec": 0, 00:16:28.834 "w_mbytes_per_sec": 0 00:16:28.834 }, 00:16:28.834 "claimed": true, 00:16:28.834 "claim_type": "exclusive_write", 00:16:28.834 "zoned": false, 00:16:28.834 "supported_io_types": { 00:16:28.834 "read": true, 00:16:28.834 "write": true, 00:16:28.834 "unmap": true, 00:16:28.834 "flush": true, 00:16:28.834 "reset": true, 00:16:28.834 "nvme_admin": false, 00:16:28.834 "nvme_io": false, 00:16:28.834 "nvme_io_md": false, 00:16:28.834 "write_zeroes": true, 00:16:28.834 "zcopy": true, 00:16:28.834 "get_zone_info": false, 00:16:28.834 "zone_management": false, 00:16:28.834 "zone_append": false, 00:16:28.834 "compare": false, 00:16:28.834 "compare_and_write": false, 00:16:28.834 "abort": true, 00:16:28.834 "seek_hole": false, 00:16:28.834 "seek_data": false, 00:16:28.834 "copy": true, 00:16:28.834 "nvme_iov_md": false 00:16:28.834 }, 00:16:28.834 "memory_domains": [ 00:16:28.834 { 00:16:28.834 "dma_device_id": "system", 00:16:28.834 "dma_device_type": 1 00:16:28.834 }, 00:16:28.834 { 00:16:28.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.834 "dma_device_type": 2 00:16:28.834 } 00:16:28.834 ], 00:16:28.834 "driver_specific": {} 00:16:28.834 }' 00:16:28.834 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.094 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.094 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.094 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.094 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.094 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.094 18:19:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.094 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.094 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.094 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.094 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.353 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.353 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.353 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:29.353 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.353 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.353 "name": "BaseBdev3", 00:16:29.353 "aliases": [ 00:16:29.353 "b3e06b06-2574-4853-9132-97cf4012a010" 00:16:29.353 ], 00:16:29.353 "product_name": "Malloc disk", 00:16:29.353 "block_size": 512, 00:16:29.353 "num_blocks": 65536, 00:16:29.353 "uuid": "b3e06b06-2574-4853-9132-97cf4012a010", 00:16:29.353 "assigned_rate_limits": { 00:16:29.353 "rw_ios_per_sec": 0, 00:16:29.353 "rw_mbytes_per_sec": 0, 00:16:29.353 "r_mbytes_per_sec": 0, 00:16:29.353 "w_mbytes_per_sec": 0 00:16:29.353 }, 00:16:29.353 "claimed": true, 00:16:29.353 "claim_type": "exclusive_write", 00:16:29.353 "zoned": false, 00:16:29.353 "supported_io_types": { 00:16:29.353 "read": true, 00:16:29.353 "write": true, 00:16:29.353 "unmap": true, 00:16:29.353 "flush": true, 00:16:29.353 "reset": true, 00:16:29.353 "nvme_admin": false, 00:16:29.353 "nvme_io": false, 00:16:29.353 "nvme_io_md": false, 
00:16:29.353 "write_zeroes": true, 00:16:29.353 "zcopy": true, 00:16:29.353 "get_zone_info": false, 00:16:29.353 "zone_management": false, 00:16:29.353 "zone_append": false, 00:16:29.353 "compare": false, 00:16:29.353 "compare_and_write": false, 00:16:29.353 "abort": true, 00:16:29.353 "seek_hole": false, 00:16:29.353 "seek_data": false, 00:16:29.353 "copy": true, 00:16:29.353 "nvme_iov_md": false 00:16:29.353 }, 00:16:29.353 "memory_domains": [ 00:16:29.353 { 00:16:29.353 "dma_device_id": "system", 00:16:29.353 "dma_device_type": 1 00:16:29.353 }, 00:16:29.353 { 00:16:29.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.353 "dma_device_type": 2 00:16:29.353 } 00:16:29.353 ], 00:16:29.353 "driver_specific": {} 00:16:29.353 }' 00:16:29.353 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.353 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.353 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.353 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.613 18:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.613 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.613 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.613 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.613 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.613 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.613 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.613 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.613 18:19:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.613 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:29.613 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.872 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.872 "name": "BaseBdev4", 00:16:29.872 "aliases": [ 00:16:29.872 "b73f3e16-3885-4990-8c68-ac52fac49bdc" 00:16:29.872 ], 00:16:29.872 "product_name": "Malloc disk", 00:16:29.872 "block_size": 512, 00:16:29.872 "num_blocks": 65536, 00:16:29.872 "uuid": "b73f3e16-3885-4990-8c68-ac52fac49bdc", 00:16:29.872 "assigned_rate_limits": { 00:16:29.872 "rw_ios_per_sec": 0, 00:16:29.872 "rw_mbytes_per_sec": 0, 00:16:29.872 "r_mbytes_per_sec": 0, 00:16:29.872 "w_mbytes_per_sec": 0 00:16:29.872 }, 00:16:29.872 "claimed": true, 00:16:29.872 "claim_type": "exclusive_write", 00:16:29.872 "zoned": false, 00:16:29.872 "supported_io_types": { 00:16:29.872 "read": true, 00:16:29.872 "write": true, 00:16:29.872 "unmap": true, 00:16:29.872 "flush": true, 00:16:29.872 "reset": true, 00:16:29.872 "nvme_admin": false, 00:16:29.872 "nvme_io": false, 00:16:29.872 "nvme_io_md": false, 00:16:29.872 "write_zeroes": true, 00:16:29.872 "zcopy": true, 00:16:29.872 "get_zone_info": false, 00:16:29.872 "zone_management": false, 00:16:29.872 "zone_append": false, 00:16:29.872 "compare": false, 00:16:29.872 "compare_and_write": false, 00:16:29.872 "abort": true, 00:16:29.872 "seek_hole": false, 00:16:29.872 "seek_data": false, 00:16:29.872 "copy": true, 00:16:29.872 "nvme_iov_md": false 00:16:29.872 }, 00:16:29.872 "memory_domains": [ 00:16:29.872 { 00:16:29.872 "dma_device_id": "system", 00:16:29.872 "dma_device_type": 1 00:16:29.872 }, 00:16:29.872 { 00:16:29.872 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:29.872 "dma_device_type": 2 00:16:29.872 } 00:16:29.872 ], 00:16:29.872 "driver_specific": {} 00:16:29.872 }' 00:16:29.872 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.872 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.872 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.872 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.131 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.131 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:30.131 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.131 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.131 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:30.131 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.131 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.131 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:30.131 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:30.391 [2024-07-24 18:19:38.816105] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.391 [2024-07-24 18:19:38.816125] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.391 [2024-07-24 18:19:38.816167] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.391 [2024-07-24 18:19:38.816211] 
bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.391 [2024-07-24 18:19:38.816219] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20f8dc0 name Existed_Raid, state offline 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2226648 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2226648 ']' 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2226648 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2226648 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2226648' 00:16:30.391 killing process with pid 2226648 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2226648 00:16:30.391 [2024-07-24 18:19:38.885517] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.391 18:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2226648 00:16:30.391 [2024-07-24 18:19:38.915264] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:30.650 00:16:30.650 real 0m24.088s 00:16:30.650 user 0m44.012s 00:16:30.650 sys 0m4.610s 00:16:30.650 
18:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.650 ************************************ 00:16:30.650 END TEST raid_state_function_test 00:16:30.650 ************************************ 00:16:30.650 18:19:39 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:30.650 18:19:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:30.650 18:19:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:30.650 18:19:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:30.650 ************************************ 00:16:30.650 START TEST raid_state_function_test_sb 00:16:30.650 ************************************ 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2231525 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2231525' 00:16:30.650 Process raid pid: 2231525 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2231525 /var/tmp/spdk-raid.sock 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2231525 ']' 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:30.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.650 18:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.650 [2024-07-24 18:19:39.216906] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:16:30.650 [2024-07-24 18:19:39.216952] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:01.0 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:01.1 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:01.2 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:01.3 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:01.4 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:01.5 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:01.6 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:01.7 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:02.0 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:02.1 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:02.2 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:02.3 cannot be used 00:16:30.909 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:02.4 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:02.5 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:02.6 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b3:02.7 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b5:01.0 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b5:01.1 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b5:01.2 cannot be used 00:16:30.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.909 EAL: Requested device 0000:b5:01.3 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:01.4 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:01.5 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:01.6 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:01.7 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:02.0 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:02.1 cannot be used 00:16:30.910 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:02.2 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:02.3 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:02.4 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:02.5 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:02.6 cannot be used 00:16:30.910 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:30.910 EAL: Requested device 0000:b5:02.7 cannot be used 00:16:30.910 [2024-07-24 18:19:39.310355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.910 [2024-07-24 18:19:39.383831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.910 [2024-07-24 18:19:39.434336] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.910 [2024-07-24 18:19:39.434377] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.477 18:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.477 18:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:31.477 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:31.737 [2024-07-24 18:19:40.173139] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.737 [2024-07-24 18:19:40.173170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:16:31.737 [2024-07-24 18:19:40.173177] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.737 [2024-07-24 18:19:40.173186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.737 [2024-07-24 18:19:40.173191] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.737 [2024-07-24 18:19:40.173198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.737 [2024-07-24 18:19:40.173204] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:31.737 [2024-07-24 18:19:40.173211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.737 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.996 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.996 "name": "Existed_Raid", 00:16:31.996 "uuid": "1df4587a-e389-4441-aa91-99697dff7111", 00:16:31.996 "strip_size_kb": 64, 00:16:31.996 "state": "configuring", 00:16:31.996 "raid_level": "concat", 00:16:31.996 "superblock": true, 00:16:31.996 "num_base_bdevs": 4, 00:16:31.996 "num_base_bdevs_discovered": 0, 00:16:31.996 "num_base_bdevs_operational": 4, 00:16:31.996 "base_bdevs_list": [ 00:16:31.996 { 00:16:31.996 "name": "BaseBdev1", 00:16:31.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.996 "is_configured": false, 00:16:31.996 "data_offset": 0, 00:16:31.996 "data_size": 0 00:16:31.996 }, 00:16:31.996 { 00:16:31.996 "name": "BaseBdev2", 00:16:31.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.996 "is_configured": false, 00:16:31.996 "data_offset": 0, 00:16:31.996 "data_size": 0 00:16:31.996 }, 00:16:31.996 { 00:16:31.996 "name": "BaseBdev3", 00:16:31.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.997 "is_configured": false, 00:16:31.997 "data_offset": 0, 00:16:31.997 "data_size": 0 00:16:31.997 }, 00:16:31.997 { 00:16:31.997 "name": "BaseBdev4", 00:16:31.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.997 "is_configured": false, 00:16:31.997 "data_offset": 0, 00:16:31.997 "data_size": 0 00:16:31.997 } 00:16:31.997 ] 00:16:31.997 }' 00:16:31.997 18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.997 18:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.255 
18:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:32.515 [2024-07-24 18:19:40.991165] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.515 [2024-07-24 18:19:40.991186] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaf91e0 name Existed_Raid, state configuring 00:16:32.515 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:32.774 [2024-07-24 18:19:41.159614] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.774 [2024-07-24 18:19:41.159634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.774 [2024-07-24 18:19:41.159640] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.774 [2024-07-24 18:19:41.159647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.774 [2024-07-24 18:19:41.159652] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:32.774 [2024-07-24 18:19:41.159659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.774 [2024-07-24 18:19:41.159664] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:32.774 [2024-07-24 18:19:41.159671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:32.774 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev1 00:16:32.774 [2024-07-24 18:19:41.336535] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.774 BaseBdev1 00:16:32.774 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:32.774 18:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:32.774 18:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:32.774 18:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:32.774 18:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:32.774 18:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:32.774 18:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.034 18:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.294 [ 00:16:33.294 { 00:16:33.294 "name": "BaseBdev1", 00:16:33.294 "aliases": [ 00:16:33.294 "17255224-bb7c-4600-9a99-431acbced444" 00:16:33.294 ], 00:16:33.294 "product_name": "Malloc disk", 00:16:33.294 "block_size": 512, 00:16:33.294 "num_blocks": 65536, 00:16:33.294 "uuid": "17255224-bb7c-4600-9a99-431acbced444", 00:16:33.294 "assigned_rate_limits": { 00:16:33.294 "rw_ios_per_sec": 0, 00:16:33.294 "rw_mbytes_per_sec": 0, 00:16:33.294 "r_mbytes_per_sec": 0, 00:16:33.294 "w_mbytes_per_sec": 0 00:16:33.294 }, 00:16:33.294 "claimed": true, 00:16:33.294 "claim_type": "exclusive_write", 00:16:33.294 "zoned": false, 00:16:33.294 "supported_io_types": { 00:16:33.294 "read": true, 00:16:33.294 "write": 
true, 00:16:33.294 "unmap": true, 00:16:33.294 "flush": true, 00:16:33.294 "reset": true, 00:16:33.294 "nvme_admin": false, 00:16:33.294 "nvme_io": false, 00:16:33.294 "nvme_io_md": false, 00:16:33.294 "write_zeroes": true, 00:16:33.294 "zcopy": true, 00:16:33.294 "get_zone_info": false, 00:16:33.294 "zone_management": false, 00:16:33.294 "zone_append": false, 00:16:33.294 "compare": false, 00:16:33.294 "compare_and_write": false, 00:16:33.294 "abort": true, 00:16:33.294 "seek_hole": false, 00:16:33.294 "seek_data": false, 00:16:33.294 "copy": true, 00:16:33.294 "nvme_iov_md": false 00:16:33.294 }, 00:16:33.294 "memory_domains": [ 00:16:33.294 { 00:16:33.294 "dma_device_id": "system", 00:16:33.294 "dma_device_type": 1 00:16:33.294 }, 00:16:33.294 { 00:16:33.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.294 "dma_device_type": 2 00:16:33.294 } 00:16:33.294 ], 00:16:33.294 "driver_specific": {} 00:16:33.294 } 00:16:33.294 ] 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.294 "name": "Existed_Raid", 00:16:33.294 "uuid": "a1832422-5caf-41ac-96fd-d76764492ab9", 00:16:33.294 "strip_size_kb": 64, 00:16:33.294 "state": "configuring", 00:16:33.294 "raid_level": "concat", 00:16:33.294 "superblock": true, 00:16:33.294 "num_base_bdevs": 4, 00:16:33.294 "num_base_bdevs_discovered": 1, 00:16:33.294 "num_base_bdevs_operational": 4, 00:16:33.294 "base_bdevs_list": [ 00:16:33.294 { 00:16:33.294 "name": "BaseBdev1", 00:16:33.294 "uuid": "17255224-bb7c-4600-9a99-431acbced444", 00:16:33.294 "is_configured": true, 00:16:33.294 "data_offset": 2048, 00:16:33.294 "data_size": 63488 00:16:33.294 }, 00:16:33.294 { 00:16:33.294 "name": "BaseBdev2", 00:16:33.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.294 "is_configured": false, 00:16:33.294 "data_offset": 0, 00:16:33.294 "data_size": 0 00:16:33.294 }, 00:16:33.294 { 00:16:33.294 "name": "BaseBdev3", 00:16:33.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.294 "is_configured": false, 00:16:33.294 "data_offset": 0, 00:16:33.294 "data_size": 0 00:16:33.294 }, 00:16:33.294 { 00:16:33.294 "name": "BaseBdev4", 00:16:33.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.294 "is_configured": false, 00:16:33.294 "data_offset": 0, 00:16:33.294 "data_size": 0 00:16:33.294 } 00:16:33.294 ] 
00:16:33.294 }' 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.294 18:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.864 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:33.864 [2024-07-24 18:19:42.411300] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.864 [2024-07-24 18:19:42.411328] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaf8a50 name Existed_Raid, state configuring 00:16:33.864 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:34.123 [2024-07-24 18:19:42.579768] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.123 [2024-07-24 18:19:42.580799] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.123 [2024-07-24 18:19:42.580825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.123 [2024-07-24 18:19:42.580831] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.123 [2024-07-24 18:19:42.580842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.123 [2024-07-24 18:19:42.580847] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:34.123 [2024-07-24 18:19:42.580854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.123 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.383 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.383 "name": "Existed_Raid", 00:16:34.383 "uuid": "ea55d5a6-664e-4818-a96f-2d87d4c0e968", 00:16:34.383 "strip_size_kb": 64, 00:16:34.383 "state": "configuring", 00:16:34.383 "raid_level": "concat", 00:16:34.383 "superblock": true, 
00:16:34.383 "num_base_bdevs": 4, 00:16:34.383 "num_base_bdevs_discovered": 1, 00:16:34.383 "num_base_bdevs_operational": 4, 00:16:34.383 "base_bdevs_list": [ 00:16:34.383 { 00:16:34.383 "name": "BaseBdev1", 00:16:34.383 "uuid": "17255224-bb7c-4600-9a99-431acbced444", 00:16:34.383 "is_configured": true, 00:16:34.383 "data_offset": 2048, 00:16:34.383 "data_size": 63488 00:16:34.383 }, 00:16:34.383 { 00:16:34.383 "name": "BaseBdev2", 00:16:34.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.383 "is_configured": false, 00:16:34.383 "data_offset": 0, 00:16:34.383 "data_size": 0 00:16:34.383 }, 00:16:34.383 { 00:16:34.383 "name": "BaseBdev3", 00:16:34.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.383 "is_configured": false, 00:16:34.383 "data_offset": 0, 00:16:34.383 "data_size": 0 00:16:34.383 }, 00:16:34.383 { 00:16:34.383 "name": "BaseBdev4", 00:16:34.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.383 "is_configured": false, 00:16:34.383 "data_offset": 0, 00:16:34.383 "data_size": 0 00:16:34.383 } 00:16:34.383 ] 00:16:34.383 }' 00:16:34.383 18:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.383 18:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.642 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:34.902 [2024-07-24 18:19:43.360539] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.902 BaseBdev2 00:16:34.902 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:34.902 18:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:34.902 18:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:16:34.902 18:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:34.902 18:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:34.902 18:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:34.902 18:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.162 18:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.162 [ 00:16:35.162 { 00:16:35.162 "name": "BaseBdev2", 00:16:35.162 "aliases": [ 00:16:35.162 "eb7acbaa-8912-48dd-9e6d-173251f51ff0" 00:16:35.162 ], 00:16:35.162 "product_name": "Malloc disk", 00:16:35.162 "block_size": 512, 00:16:35.162 "num_blocks": 65536, 00:16:35.162 "uuid": "eb7acbaa-8912-48dd-9e6d-173251f51ff0", 00:16:35.162 "assigned_rate_limits": { 00:16:35.162 "rw_ios_per_sec": 0, 00:16:35.162 "rw_mbytes_per_sec": 0, 00:16:35.162 "r_mbytes_per_sec": 0, 00:16:35.162 "w_mbytes_per_sec": 0 00:16:35.162 }, 00:16:35.162 "claimed": true, 00:16:35.162 "claim_type": "exclusive_write", 00:16:35.162 "zoned": false, 00:16:35.162 "supported_io_types": { 00:16:35.162 "read": true, 00:16:35.162 "write": true, 00:16:35.162 "unmap": true, 00:16:35.162 "flush": true, 00:16:35.162 "reset": true, 00:16:35.162 "nvme_admin": false, 00:16:35.162 "nvme_io": false, 00:16:35.162 "nvme_io_md": false, 00:16:35.162 "write_zeroes": true, 00:16:35.162 "zcopy": true, 00:16:35.162 "get_zone_info": false, 00:16:35.162 "zone_management": false, 00:16:35.162 "zone_append": false, 00:16:35.162 "compare": false, 00:16:35.162 "compare_and_write": false, 00:16:35.162 "abort": true, 00:16:35.162 "seek_hole": false, 
00:16:35.163 "seek_data": false, 00:16:35.163 "copy": true, 00:16:35.163 "nvme_iov_md": false 00:16:35.163 }, 00:16:35.163 "memory_domains": [ 00:16:35.163 { 00:16:35.163 "dma_device_id": "system", 00:16:35.163 "dma_device_type": 1 00:16:35.163 }, 00:16:35.163 { 00:16:35.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.163 "dma_device_type": 2 00:16:35.163 } 00:16:35.163 ], 00:16:35.163 "driver_specific": {} 00:16:35.163 } 00:16:35.163 ] 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.163 18:19:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.163 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.422 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.422 "name": "Existed_Raid", 00:16:35.422 "uuid": "ea55d5a6-664e-4818-a96f-2d87d4c0e968", 00:16:35.422 "strip_size_kb": 64, 00:16:35.422 "state": "configuring", 00:16:35.422 "raid_level": "concat", 00:16:35.422 "superblock": true, 00:16:35.422 "num_base_bdevs": 4, 00:16:35.422 "num_base_bdevs_discovered": 2, 00:16:35.422 "num_base_bdevs_operational": 4, 00:16:35.422 "base_bdevs_list": [ 00:16:35.422 { 00:16:35.422 "name": "BaseBdev1", 00:16:35.422 "uuid": "17255224-bb7c-4600-9a99-431acbced444", 00:16:35.422 "is_configured": true, 00:16:35.422 "data_offset": 2048, 00:16:35.422 "data_size": 63488 00:16:35.422 }, 00:16:35.422 { 00:16:35.422 "name": "BaseBdev2", 00:16:35.422 "uuid": "eb7acbaa-8912-48dd-9e6d-173251f51ff0", 00:16:35.422 "is_configured": true, 00:16:35.422 "data_offset": 2048, 00:16:35.422 "data_size": 63488 00:16:35.422 }, 00:16:35.422 { 00:16:35.422 "name": "BaseBdev3", 00:16:35.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.422 "is_configured": false, 00:16:35.422 "data_offset": 0, 00:16:35.422 "data_size": 0 00:16:35.422 }, 00:16:35.422 { 00:16:35.422 "name": "BaseBdev4", 00:16:35.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.422 "is_configured": false, 00:16:35.422 "data_offset": 0, 00:16:35.422 "data_size": 0 00:16:35.422 } 00:16:35.422 ] 00:16:35.422 }' 00:16:35.422 18:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.422 18:19:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.990 18:19:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:35.990 [2024-07-24 18:19:44.454115] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.990 BaseBdev3 00:16:35.990 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:35.990 18:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:35.990 18:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:35.990 18:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:35.990 18:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:35.990 18:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:35.990 18:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:36.249 [ 00:16:36.249 { 00:16:36.249 "name": "BaseBdev3", 00:16:36.249 "aliases": [ 00:16:36.249 "608985fb-025d-451d-8439-9b171541e677" 00:16:36.249 ], 00:16:36.249 "product_name": "Malloc disk", 00:16:36.249 "block_size": 512, 00:16:36.249 "num_blocks": 65536, 00:16:36.249 "uuid": "608985fb-025d-451d-8439-9b171541e677", 00:16:36.249 "assigned_rate_limits": { 00:16:36.249 "rw_ios_per_sec": 0, 00:16:36.249 "rw_mbytes_per_sec": 0, 00:16:36.249 "r_mbytes_per_sec": 0, 00:16:36.249 "w_mbytes_per_sec": 0 00:16:36.249 }, 
00:16:36.249 "claimed": true, 00:16:36.249 "claim_type": "exclusive_write", 00:16:36.249 "zoned": false, 00:16:36.249 "supported_io_types": { 00:16:36.249 "read": true, 00:16:36.249 "write": true, 00:16:36.249 "unmap": true, 00:16:36.249 "flush": true, 00:16:36.249 "reset": true, 00:16:36.249 "nvme_admin": false, 00:16:36.249 "nvme_io": false, 00:16:36.249 "nvme_io_md": false, 00:16:36.249 "write_zeroes": true, 00:16:36.249 "zcopy": true, 00:16:36.249 "get_zone_info": false, 00:16:36.249 "zone_management": false, 00:16:36.249 "zone_append": false, 00:16:36.249 "compare": false, 00:16:36.249 "compare_and_write": false, 00:16:36.249 "abort": true, 00:16:36.249 "seek_hole": false, 00:16:36.249 "seek_data": false, 00:16:36.249 "copy": true, 00:16:36.249 "nvme_iov_md": false 00:16:36.249 }, 00:16:36.249 "memory_domains": [ 00:16:36.249 { 00:16:36.249 "dma_device_id": "system", 00:16:36.249 "dma_device_type": 1 00:16:36.249 }, 00:16:36.249 { 00:16:36.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.249 "dma_device_type": 2 00:16:36.249 } 00:16:36.249 ], 00:16:36.249 "driver_specific": {} 00:16:36.249 } 00:16:36.249 ] 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:36.249 18:19:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.249 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.509 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.509 "name": "Existed_Raid", 00:16:36.509 "uuid": "ea55d5a6-664e-4818-a96f-2d87d4c0e968", 00:16:36.509 "strip_size_kb": 64, 00:16:36.509 "state": "configuring", 00:16:36.509 "raid_level": "concat", 00:16:36.509 "superblock": true, 00:16:36.509 "num_base_bdevs": 4, 00:16:36.509 "num_base_bdevs_discovered": 3, 00:16:36.509 "num_base_bdevs_operational": 4, 00:16:36.509 "base_bdevs_list": [ 00:16:36.509 { 00:16:36.509 "name": "BaseBdev1", 00:16:36.509 "uuid": "17255224-bb7c-4600-9a99-431acbced444", 00:16:36.509 "is_configured": true, 00:16:36.509 "data_offset": 2048, 00:16:36.509 "data_size": 63488 00:16:36.509 }, 00:16:36.509 { 00:16:36.509 "name": "BaseBdev2", 00:16:36.509 "uuid": "eb7acbaa-8912-48dd-9e6d-173251f51ff0", 00:16:36.509 "is_configured": true, 00:16:36.509 "data_offset": 2048, 00:16:36.509 "data_size": 63488 00:16:36.509 }, 00:16:36.509 { 00:16:36.509 "name": 
"BaseBdev3", 00:16:36.509 "uuid": "608985fb-025d-451d-8439-9b171541e677", 00:16:36.509 "is_configured": true, 00:16:36.509 "data_offset": 2048, 00:16:36.509 "data_size": 63488 00:16:36.509 }, 00:16:36.509 { 00:16:36.509 "name": "BaseBdev4", 00:16:36.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.509 "is_configured": false, 00:16:36.509 "data_offset": 0, 00:16:36.509 "data_size": 0 00:16:36.509 } 00:16:36.509 ] 00:16:36.509 }' 00:16:36.509 18:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.509 18:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.077 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:37.077 [2024-07-24 18:19:45.631852] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:37.077 [2024-07-24 18:19:45.631967] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xaf9ab0 00:16:37.077 [2024-07-24 18:19:45.631976] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:37.077 [2024-07-24 18:19:45.632097] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcaccd0 00:16:37.077 [2024-07-24 18:19:45.632173] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xaf9ab0 00:16:37.077 [2024-07-24 18:19:45.632179] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xaf9ab0 00:16:37.077 [2024-07-24 18:19:45.632236] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.077 BaseBdev4 00:16:37.077 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:37.077 18:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev4 00:16:37.077 18:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:37.077 18:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:37.077 18:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:37.077 18:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:37.077 18:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.336 18:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:37.596 [ 00:16:37.596 { 00:16:37.596 "name": "BaseBdev4", 00:16:37.596 "aliases": [ 00:16:37.596 "86f62118-2495-4f92-b625-c51be2fdb8d2" 00:16:37.596 ], 00:16:37.596 "product_name": "Malloc disk", 00:16:37.596 "block_size": 512, 00:16:37.596 "num_blocks": 65536, 00:16:37.596 "uuid": "86f62118-2495-4f92-b625-c51be2fdb8d2", 00:16:37.596 "assigned_rate_limits": { 00:16:37.596 "rw_ios_per_sec": 0, 00:16:37.596 "rw_mbytes_per_sec": 0, 00:16:37.596 "r_mbytes_per_sec": 0, 00:16:37.596 "w_mbytes_per_sec": 0 00:16:37.596 }, 00:16:37.596 "claimed": true, 00:16:37.596 "claim_type": "exclusive_write", 00:16:37.596 "zoned": false, 00:16:37.596 "supported_io_types": { 00:16:37.596 "read": true, 00:16:37.596 "write": true, 00:16:37.596 "unmap": true, 00:16:37.596 "flush": true, 00:16:37.596 "reset": true, 00:16:37.596 "nvme_admin": false, 00:16:37.596 "nvme_io": false, 00:16:37.596 "nvme_io_md": false, 00:16:37.596 "write_zeroes": true, 00:16:37.596 "zcopy": true, 00:16:37.596 "get_zone_info": false, 00:16:37.596 "zone_management": false, 00:16:37.596 "zone_append": false, 00:16:37.596 
"compare": false, 00:16:37.596 "compare_and_write": false, 00:16:37.596 "abort": true, 00:16:37.596 "seek_hole": false, 00:16:37.596 "seek_data": false, 00:16:37.596 "copy": true, 00:16:37.596 "nvme_iov_md": false 00:16:37.596 }, 00:16:37.596 "memory_domains": [ 00:16:37.596 { 00:16:37.596 "dma_device_id": "system", 00:16:37.596 "dma_device_type": 1 00:16:37.596 }, 00:16:37.596 { 00:16:37.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.596 "dma_device_type": 2 00:16:37.596 } 00:16:37.596 ], 00:16:37.596 "driver_specific": {} 00:16:37.596 } 00:16:37.596 ] 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.596 18:19:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.596 18:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.596 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.596 "name": "Existed_Raid", 00:16:37.596 "uuid": "ea55d5a6-664e-4818-a96f-2d87d4c0e968", 00:16:37.596 "strip_size_kb": 64, 00:16:37.596 "state": "online", 00:16:37.596 "raid_level": "concat", 00:16:37.596 "superblock": true, 00:16:37.596 "num_base_bdevs": 4, 00:16:37.596 "num_base_bdevs_discovered": 4, 00:16:37.596 "num_base_bdevs_operational": 4, 00:16:37.596 "base_bdevs_list": [ 00:16:37.596 { 00:16:37.596 "name": "BaseBdev1", 00:16:37.596 "uuid": "17255224-bb7c-4600-9a99-431acbced444", 00:16:37.596 "is_configured": true, 00:16:37.596 "data_offset": 2048, 00:16:37.596 "data_size": 63488 00:16:37.596 }, 00:16:37.596 { 00:16:37.596 "name": "BaseBdev2", 00:16:37.596 "uuid": "eb7acbaa-8912-48dd-9e6d-173251f51ff0", 00:16:37.596 "is_configured": true, 00:16:37.597 "data_offset": 2048, 00:16:37.597 "data_size": 63488 00:16:37.597 }, 00:16:37.597 { 00:16:37.597 "name": "BaseBdev3", 00:16:37.597 "uuid": "608985fb-025d-451d-8439-9b171541e677", 00:16:37.597 "is_configured": true, 00:16:37.597 "data_offset": 2048, 00:16:37.597 "data_size": 63488 00:16:37.597 }, 00:16:37.597 { 00:16:37.597 "name": "BaseBdev4", 00:16:37.597 "uuid": "86f62118-2495-4f92-b625-c51be2fdb8d2", 00:16:37.597 "is_configured": true, 00:16:37.597 "data_offset": 2048, 00:16:37.597 "data_size": 63488 00:16:37.597 } 00:16:37.597 ] 00:16:37.597 }' 00:16:37.597 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.597 18:19:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.166 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.166 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:38.166 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:38.166 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:38.166 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:38.166 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:38.166 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:38.166 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:38.166 [2024-07-24 18:19:46.758990] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.426 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:38.426 "name": "Existed_Raid", 00:16:38.426 "aliases": [ 00:16:38.426 "ea55d5a6-664e-4818-a96f-2d87d4c0e968" 00:16:38.426 ], 00:16:38.426 "product_name": "Raid Volume", 00:16:38.426 "block_size": 512, 00:16:38.426 "num_blocks": 253952, 00:16:38.426 "uuid": "ea55d5a6-664e-4818-a96f-2d87d4c0e968", 00:16:38.426 "assigned_rate_limits": { 00:16:38.426 "rw_ios_per_sec": 0, 00:16:38.426 "rw_mbytes_per_sec": 0, 00:16:38.426 "r_mbytes_per_sec": 0, 00:16:38.426 "w_mbytes_per_sec": 0 00:16:38.426 }, 00:16:38.426 "claimed": false, 00:16:38.426 "zoned": false, 00:16:38.426 "supported_io_types": { 00:16:38.426 "read": true, 00:16:38.426 "write": true, 00:16:38.426 "unmap": true, 
00:16:38.426 "flush": true, 00:16:38.426 "reset": true, 00:16:38.426 "nvme_admin": false, 00:16:38.426 "nvme_io": false, 00:16:38.426 "nvme_io_md": false, 00:16:38.426 "write_zeroes": true, 00:16:38.426 "zcopy": false, 00:16:38.426 "get_zone_info": false, 00:16:38.426 "zone_management": false, 00:16:38.426 "zone_append": false, 00:16:38.426 "compare": false, 00:16:38.426 "compare_and_write": false, 00:16:38.426 "abort": false, 00:16:38.426 "seek_hole": false, 00:16:38.426 "seek_data": false, 00:16:38.426 "copy": false, 00:16:38.426 "nvme_iov_md": false 00:16:38.426 }, 00:16:38.426 "memory_domains": [ 00:16:38.426 { 00:16:38.426 "dma_device_id": "system", 00:16:38.426 "dma_device_type": 1 00:16:38.426 }, 00:16:38.426 { 00:16:38.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.427 "dma_device_type": 2 00:16:38.427 }, 00:16:38.427 { 00:16:38.427 "dma_device_id": "system", 00:16:38.427 "dma_device_type": 1 00:16:38.427 }, 00:16:38.427 { 00:16:38.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.427 "dma_device_type": 2 00:16:38.427 }, 00:16:38.427 { 00:16:38.427 "dma_device_id": "system", 00:16:38.427 "dma_device_type": 1 00:16:38.427 }, 00:16:38.427 { 00:16:38.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.427 "dma_device_type": 2 00:16:38.427 }, 00:16:38.427 { 00:16:38.427 "dma_device_id": "system", 00:16:38.427 "dma_device_type": 1 00:16:38.427 }, 00:16:38.427 { 00:16:38.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.427 "dma_device_type": 2 00:16:38.427 } 00:16:38.427 ], 00:16:38.427 "driver_specific": { 00:16:38.427 "raid": { 00:16:38.427 "uuid": "ea55d5a6-664e-4818-a96f-2d87d4c0e968", 00:16:38.427 "strip_size_kb": 64, 00:16:38.427 "state": "online", 00:16:38.427 "raid_level": "concat", 00:16:38.427 "superblock": true, 00:16:38.427 "num_base_bdevs": 4, 00:16:38.427 "num_base_bdevs_discovered": 4, 00:16:38.427 "num_base_bdevs_operational": 4, 00:16:38.427 "base_bdevs_list": [ 00:16:38.427 { 00:16:38.427 "name": "BaseBdev1", 00:16:38.427 
"uuid": "17255224-bb7c-4600-9a99-431acbced444", 00:16:38.427 "is_configured": true, 00:16:38.427 "data_offset": 2048, 00:16:38.427 "data_size": 63488 00:16:38.427 }, 00:16:38.427 { 00:16:38.427 "name": "BaseBdev2", 00:16:38.427 "uuid": "eb7acbaa-8912-48dd-9e6d-173251f51ff0", 00:16:38.427 "is_configured": true, 00:16:38.427 "data_offset": 2048, 00:16:38.427 "data_size": 63488 00:16:38.427 }, 00:16:38.427 { 00:16:38.427 "name": "BaseBdev3", 00:16:38.427 "uuid": "608985fb-025d-451d-8439-9b171541e677", 00:16:38.427 "is_configured": true, 00:16:38.427 "data_offset": 2048, 00:16:38.427 "data_size": 63488 00:16:38.427 }, 00:16:38.427 { 00:16:38.427 "name": "BaseBdev4", 00:16:38.427 "uuid": "86f62118-2495-4f92-b625-c51be2fdb8d2", 00:16:38.427 "is_configured": true, 00:16:38.427 "data_offset": 2048, 00:16:38.427 "data_size": 63488 00:16:38.427 } 00:16:38.427 ] 00:16:38.427 } 00:16:38.427 } 00:16:38.427 }' 00:16:38.427 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.427 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:38.427 BaseBdev2 00:16:38.427 BaseBdev3 00:16:38.427 BaseBdev4' 00:16:38.427 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:38.427 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:38.427 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:38.427 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:38.427 "name": "BaseBdev1", 00:16:38.427 "aliases": [ 00:16:38.427 "17255224-bb7c-4600-9a99-431acbced444" 00:16:38.427 ], 00:16:38.427 "product_name": "Malloc disk", 00:16:38.427 
"block_size": 512, 00:16:38.427 "num_blocks": 65536, 00:16:38.427 "uuid": "17255224-bb7c-4600-9a99-431acbced444", 00:16:38.427 "assigned_rate_limits": { 00:16:38.427 "rw_ios_per_sec": 0, 00:16:38.427 "rw_mbytes_per_sec": 0, 00:16:38.427 "r_mbytes_per_sec": 0, 00:16:38.427 "w_mbytes_per_sec": 0 00:16:38.427 }, 00:16:38.427 "claimed": true, 00:16:38.427 "claim_type": "exclusive_write", 00:16:38.427 "zoned": false, 00:16:38.427 "supported_io_types": { 00:16:38.427 "read": true, 00:16:38.427 "write": true, 00:16:38.427 "unmap": true, 00:16:38.427 "flush": true, 00:16:38.427 "reset": true, 00:16:38.427 "nvme_admin": false, 00:16:38.427 "nvme_io": false, 00:16:38.427 "nvme_io_md": false, 00:16:38.427 "write_zeroes": true, 00:16:38.427 "zcopy": true, 00:16:38.427 "get_zone_info": false, 00:16:38.427 "zone_management": false, 00:16:38.427 "zone_append": false, 00:16:38.427 "compare": false, 00:16:38.427 "compare_and_write": false, 00:16:38.427 "abort": true, 00:16:38.427 "seek_hole": false, 00:16:38.427 "seek_data": false, 00:16:38.427 "copy": true, 00:16:38.427 "nvme_iov_md": false 00:16:38.427 }, 00:16:38.427 "memory_domains": [ 00:16:38.427 { 00:16:38.427 "dma_device_id": "system", 00:16:38.427 "dma_device_type": 1 00:16:38.427 }, 00:16:38.427 { 00:16:38.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.427 "dma_device_type": 2 00:16:38.427 } 00:16:38.427 ], 00:16:38.427 "driver_specific": {} 00:16:38.427 }' 00:16:38.427 18:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.687 18:19:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:38.687 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:38.946 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:38.947 "name": "BaseBdev2", 00:16:38.947 "aliases": [ 00:16:38.947 "eb7acbaa-8912-48dd-9e6d-173251f51ff0" 00:16:38.947 ], 00:16:38.947 "product_name": "Malloc disk", 00:16:38.947 "block_size": 512, 00:16:38.947 "num_blocks": 65536, 00:16:38.947 "uuid": "eb7acbaa-8912-48dd-9e6d-173251f51ff0", 00:16:38.947 "assigned_rate_limits": { 00:16:38.947 "rw_ios_per_sec": 0, 00:16:38.947 "rw_mbytes_per_sec": 0, 00:16:38.947 "r_mbytes_per_sec": 0, 00:16:38.947 "w_mbytes_per_sec": 0 00:16:38.947 }, 00:16:38.947 "claimed": true, 00:16:38.947 "claim_type": "exclusive_write", 00:16:38.947 "zoned": false, 00:16:38.947 "supported_io_types": { 00:16:38.947 "read": true, 00:16:38.947 "write": true, 00:16:38.947 "unmap": true, 00:16:38.947 
"flush": true, 00:16:38.947 "reset": true, 00:16:38.947 "nvme_admin": false, 00:16:38.947 "nvme_io": false, 00:16:38.947 "nvme_io_md": false, 00:16:38.947 "write_zeroes": true, 00:16:38.947 "zcopy": true, 00:16:38.947 "get_zone_info": false, 00:16:38.947 "zone_management": false, 00:16:38.947 "zone_append": false, 00:16:38.947 "compare": false, 00:16:38.947 "compare_and_write": false, 00:16:38.947 "abort": true, 00:16:38.947 "seek_hole": false, 00:16:38.947 "seek_data": false, 00:16:38.947 "copy": true, 00:16:38.947 "nvme_iov_md": false 00:16:38.947 }, 00:16:38.947 "memory_domains": [ 00:16:38.947 { 00:16:38.947 "dma_device_id": "system", 00:16:38.947 "dma_device_type": 1 00:16:38.947 }, 00:16:38.947 { 00:16:38.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.947 "dma_device_type": 2 00:16:38.947 } 00:16:38.947 ], 00:16:38.947 "driver_specific": {} 00:16:38.947 }' 00:16:38.947 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.947 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.947 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:38.947 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.947 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.947 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:38.947 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.206 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.206 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:39.206 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.206 18:19:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.206 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:39.206 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:39.206 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:39.206 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:39.465 "name": "BaseBdev3", 00:16:39.465 "aliases": [ 00:16:39.465 "608985fb-025d-451d-8439-9b171541e677" 00:16:39.465 ], 00:16:39.465 "product_name": "Malloc disk", 00:16:39.465 "block_size": 512, 00:16:39.465 "num_blocks": 65536, 00:16:39.465 "uuid": "608985fb-025d-451d-8439-9b171541e677", 00:16:39.465 "assigned_rate_limits": { 00:16:39.465 "rw_ios_per_sec": 0, 00:16:39.465 "rw_mbytes_per_sec": 0, 00:16:39.465 "r_mbytes_per_sec": 0, 00:16:39.465 "w_mbytes_per_sec": 0 00:16:39.465 }, 00:16:39.465 "claimed": true, 00:16:39.465 "claim_type": "exclusive_write", 00:16:39.465 "zoned": false, 00:16:39.465 "supported_io_types": { 00:16:39.465 "read": true, 00:16:39.465 "write": true, 00:16:39.465 "unmap": true, 00:16:39.465 "flush": true, 00:16:39.465 "reset": true, 00:16:39.465 "nvme_admin": false, 00:16:39.465 "nvme_io": false, 00:16:39.465 "nvme_io_md": false, 00:16:39.465 "write_zeroes": true, 00:16:39.465 "zcopy": true, 00:16:39.465 "get_zone_info": false, 00:16:39.465 "zone_management": false, 00:16:39.465 "zone_append": false, 00:16:39.465 "compare": false, 00:16:39.465 "compare_and_write": false, 00:16:39.465 "abort": true, 00:16:39.465 "seek_hole": false, 00:16:39.465 "seek_data": false, 00:16:39.465 "copy": true, 00:16:39.465 "nvme_iov_md": 
false 00:16:39.465 }, 00:16:39.465 "memory_domains": [ 00:16:39.465 { 00:16:39.465 "dma_device_id": "system", 00:16:39.465 "dma_device_type": 1 00:16:39.465 }, 00:16:39.465 { 00:16:39.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.465 "dma_device_type": 2 00:16:39.465 } 00:16:39.465 ], 00:16:39.465 "driver_specific": {} 00:16:39.465 }' 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:39.465 18:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.465 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.465 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:39.465 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:39.465 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:39.465 18:19:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:39.724 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:39.724 "name": "BaseBdev4", 00:16:39.724 "aliases": [ 00:16:39.724 "86f62118-2495-4f92-b625-c51be2fdb8d2" 00:16:39.724 ], 00:16:39.724 "product_name": "Malloc disk", 00:16:39.724 "block_size": 512, 00:16:39.724 "num_blocks": 65536, 00:16:39.724 "uuid": "86f62118-2495-4f92-b625-c51be2fdb8d2", 00:16:39.724 "assigned_rate_limits": { 00:16:39.724 "rw_ios_per_sec": 0, 00:16:39.724 "rw_mbytes_per_sec": 0, 00:16:39.724 "r_mbytes_per_sec": 0, 00:16:39.724 "w_mbytes_per_sec": 0 00:16:39.724 }, 00:16:39.724 "claimed": true, 00:16:39.724 "claim_type": "exclusive_write", 00:16:39.724 "zoned": false, 00:16:39.724 "supported_io_types": { 00:16:39.725 "read": true, 00:16:39.725 "write": true, 00:16:39.725 "unmap": true, 00:16:39.725 "flush": true, 00:16:39.725 "reset": true, 00:16:39.725 "nvme_admin": false, 00:16:39.725 "nvme_io": false, 00:16:39.725 "nvme_io_md": false, 00:16:39.725 "write_zeroes": true, 00:16:39.725 "zcopy": true, 00:16:39.725 "get_zone_info": false, 00:16:39.725 "zone_management": false, 00:16:39.725 "zone_append": false, 00:16:39.725 "compare": false, 00:16:39.725 "compare_and_write": false, 00:16:39.725 "abort": true, 00:16:39.725 "seek_hole": false, 00:16:39.725 "seek_data": false, 00:16:39.725 "copy": true, 00:16:39.725 "nvme_iov_md": false 00:16:39.725 }, 00:16:39.725 "memory_domains": [ 00:16:39.725 { 00:16:39.725 "dma_device_id": "system", 00:16:39.725 "dma_device_type": 1 00:16:39.725 }, 00:16:39.725 { 00:16:39.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.725 "dma_device_type": 2 00:16:39.725 } 00:16:39.725 ], 00:16:39.725 "driver_specific": {} 00:16:39.725 }' 00:16:39.725 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.725 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:16:39.725 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:39.725 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.725 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.984 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:39.984 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.984 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.984 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:39.984 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.984 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.984 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:39.984 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:40.243 [2024-07-24 18:19:48.623596] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.243 [2024-07-24 18:19:48.623616] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.243 [2024-07-24 18:19:48.623653] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:40.243 18:19:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.243 "name": "Existed_Raid", 00:16:40.243 "uuid": "ea55d5a6-664e-4818-a96f-2d87d4c0e968", 00:16:40.243 "strip_size_kb": 64, 00:16:40.243 "state": "offline", 00:16:40.243 
"raid_level": "concat", 00:16:40.243 "superblock": true, 00:16:40.243 "num_base_bdevs": 4, 00:16:40.243 "num_base_bdevs_discovered": 3, 00:16:40.243 "num_base_bdevs_operational": 3, 00:16:40.243 "base_bdevs_list": [ 00:16:40.243 { 00:16:40.243 "name": null, 00:16:40.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.243 "is_configured": false, 00:16:40.243 "data_offset": 2048, 00:16:40.243 "data_size": 63488 00:16:40.243 }, 00:16:40.243 { 00:16:40.243 "name": "BaseBdev2", 00:16:40.243 "uuid": "eb7acbaa-8912-48dd-9e6d-173251f51ff0", 00:16:40.243 "is_configured": true, 00:16:40.243 "data_offset": 2048, 00:16:40.243 "data_size": 63488 00:16:40.243 }, 00:16:40.243 { 00:16:40.243 "name": "BaseBdev3", 00:16:40.243 "uuid": "608985fb-025d-451d-8439-9b171541e677", 00:16:40.243 "is_configured": true, 00:16:40.243 "data_offset": 2048, 00:16:40.243 "data_size": 63488 00:16:40.243 }, 00:16:40.243 { 00:16:40.243 "name": "BaseBdev4", 00:16:40.243 "uuid": "86f62118-2495-4f92-b625-c51be2fdb8d2", 00:16:40.243 "is_configured": true, 00:16:40.243 "data_offset": 2048, 00:16:40.243 "data_size": 63488 00:16:40.243 } 00:16:40.243 ] 00:16:40.243 }' 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.243 18:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.812 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:40.812 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:40.812 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.812 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:41.071 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:16:41.071 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:41.071 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:41.071 [2024-07-24 18:19:49.598999] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:41.071 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:41.071 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:41.071 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.071 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:41.373 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:41.373 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:41.373 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:41.634 [2024-07-24 18:19:49.933461] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.634 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:41.634 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:41.634 18:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.634 18:19:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:41.634 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:41.634 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:41.634 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:41.893 [2024-07-24 18:19:50.263851] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:41.893 [2024-07-24 18:19:50.263884] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaf9ab0 name Existed_Raid, state offline 00:16:41.893 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:41.893 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:41.893 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.893 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:41.893 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:41.893 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:41.893 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:41.893 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:41.893 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:41.893 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:42.153 BaseBdev2 00:16:42.153 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:42.153 18:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:42.153 18:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:42.153 18:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:42.153 18:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:42.153 18:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:42.153 18:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.413 18:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:42.413 [ 00:16:42.413 { 00:16:42.413 "name": "BaseBdev2", 00:16:42.413 "aliases": [ 00:16:42.413 "8a573068-6c12-4d62-91f4-00a85e71c8f3" 00:16:42.413 ], 00:16:42.413 "product_name": "Malloc disk", 00:16:42.413 "block_size": 512, 00:16:42.413 "num_blocks": 65536, 00:16:42.413 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:42.413 "assigned_rate_limits": { 00:16:42.413 "rw_ios_per_sec": 0, 00:16:42.413 "rw_mbytes_per_sec": 0, 00:16:42.413 "r_mbytes_per_sec": 0, 00:16:42.413 "w_mbytes_per_sec": 0 00:16:42.413 }, 00:16:42.413 "claimed": false, 00:16:42.413 "zoned": false, 00:16:42.413 "supported_io_types": { 00:16:42.413 "read": true, 00:16:42.413 "write": true, 00:16:42.413 "unmap": true, 00:16:42.413 "flush": 
true, 00:16:42.413 "reset": true, 00:16:42.413 "nvme_admin": false, 00:16:42.413 "nvme_io": false, 00:16:42.413 "nvme_io_md": false, 00:16:42.413 "write_zeroes": true, 00:16:42.413 "zcopy": true, 00:16:42.413 "get_zone_info": false, 00:16:42.413 "zone_management": false, 00:16:42.413 "zone_append": false, 00:16:42.413 "compare": false, 00:16:42.413 "compare_and_write": false, 00:16:42.413 "abort": true, 00:16:42.413 "seek_hole": false, 00:16:42.413 "seek_data": false, 00:16:42.413 "copy": true, 00:16:42.413 "nvme_iov_md": false 00:16:42.413 }, 00:16:42.413 "memory_domains": [ 00:16:42.413 { 00:16:42.413 "dma_device_id": "system", 00:16:42.413 "dma_device_type": 1 00:16:42.413 }, 00:16:42.413 { 00:16:42.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.413 "dma_device_type": 2 00:16:42.413 } 00:16:42.413 ], 00:16:42.413 "driver_specific": {} 00:16:42.413 } 00:16:42.413 ] 00:16:42.413 18:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:42.413 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:42.413 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:42.413 18:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:42.672 BaseBdev3 00:16:42.672 18:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:42.672 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:42.672 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:42.672 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:42.672 18:19:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:42.672 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:42.672 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.672 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:42.932 [ 00:16:42.932 { 00:16:42.932 "name": "BaseBdev3", 00:16:42.932 "aliases": [ 00:16:42.932 "6c5ff594-724a-49c2-a56a-03166cb648a8" 00:16:42.932 ], 00:16:42.932 "product_name": "Malloc disk", 00:16:42.932 "block_size": 512, 00:16:42.932 "num_blocks": 65536, 00:16:42.932 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:42.932 "assigned_rate_limits": { 00:16:42.932 "rw_ios_per_sec": 0, 00:16:42.932 "rw_mbytes_per_sec": 0, 00:16:42.932 "r_mbytes_per_sec": 0, 00:16:42.932 "w_mbytes_per_sec": 0 00:16:42.932 }, 00:16:42.932 "claimed": false, 00:16:42.932 "zoned": false, 00:16:42.932 "supported_io_types": { 00:16:42.932 "read": true, 00:16:42.932 "write": true, 00:16:42.932 "unmap": true, 00:16:42.932 "flush": true, 00:16:42.932 "reset": true, 00:16:42.932 "nvme_admin": false, 00:16:42.932 "nvme_io": false, 00:16:42.932 "nvme_io_md": false, 00:16:42.932 "write_zeroes": true, 00:16:42.932 "zcopy": true, 00:16:42.932 "get_zone_info": false, 00:16:42.932 "zone_management": false, 00:16:42.932 "zone_append": false, 00:16:42.932 "compare": false, 00:16:42.932 "compare_and_write": false, 00:16:42.932 "abort": true, 00:16:42.932 "seek_hole": false, 00:16:42.932 "seek_data": false, 00:16:42.932 "copy": true, 00:16:42.932 "nvme_iov_md": false 00:16:42.932 }, 00:16:42.932 "memory_domains": [ 00:16:42.932 { 00:16:42.932 "dma_device_id": "system", 00:16:42.932 "dma_device_type": 1 
00:16:42.932 }, 00:16:42.932 { 00:16:42.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.932 "dma_device_type": 2 00:16:42.932 } 00:16:42.932 ], 00:16:42.932 "driver_specific": {} 00:16:42.932 } 00:16:42.932 ] 00:16:42.932 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:42.932 18:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:42.932 18:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:42.932 18:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:43.191 BaseBdev4 00:16:43.191 18:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:43.191 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:43.191 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:43.191 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:43.191 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:43.191 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:43.191 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.191 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:43.450 [ 00:16:43.450 { 00:16:43.450 "name": "BaseBdev4", 00:16:43.450 "aliases": [ 
00:16:43.450 "422e87ae-35f6-4226-a3e4-2c548406c03a" 00:16:43.450 ], 00:16:43.450 "product_name": "Malloc disk", 00:16:43.450 "block_size": 512, 00:16:43.450 "num_blocks": 65536, 00:16:43.450 "uuid": "422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:43.450 "assigned_rate_limits": { 00:16:43.450 "rw_ios_per_sec": 0, 00:16:43.450 "rw_mbytes_per_sec": 0, 00:16:43.450 "r_mbytes_per_sec": 0, 00:16:43.450 "w_mbytes_per_sec": 0 00:16:43.450 }, 00:16:43.450 "claimed": false, 00:16:43.450 "zoned": false, 00:16:43.450 "supported_io_types": { 00:16:43.450 "read": true, 00:16:43.450 "write": true, 00:16:43.450 "unmap": true, 00:16:43.450 "flush": true, 00:16:43.450 "reset": true, 00:16:43.450 "nvme_admin": false, 00:16:43.450 "nvme_io": false, 00:16:43.450 "nvme_io_md": false, 00:16:43.450 "write_zeroes": true, 00:16:43.450 "zcopy": true, 00:16:43.450 "get_zone_info": false, 00:16:43.450 "zone_management": false, 00:16:43.450 "zone_append": false, 00:16:43.450 "compare": false, 00:16:43.450 "compare_and_write": false, 00:16:43.450 "abort": true, 00:16:43.450 "seek_hole": false, 00:16:43.450 "seek_data": false, 00:16:43.450 "copy": true, 00:16:43.450 "nvme_iov_md": false 00:16:43.450 }, 00:16:43.450 "memory_domains": [ 00:16:43.450 { 00:16:43.450 "dma_device_id": "system", 00:16:43.450 "dma_device_type": 1 00:16:43.450 }, 00:16:43.450 { 00:16:43.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.450 "dma_device_type": 2 00:16:43.450 } 00:16:43.450 ], 00:16:43.450 "driver_specific": {} 00:16:43.450 } 00:16:43.450 ] 00:16:43.450 18:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:43.450 18:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:43.450 18:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:43.450 18:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:43.709 [2024-07-24 18:19:52.053516] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.709 [2024-07-24 18:19:52.053547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.709 [2024-07-24 18:19:52.053559] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.709 [2024-07-24 18:19:52.054437] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.709 [2024-07-24 18:19:52.054466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.709 "name": "Existed_Raid", 00:16:43.709 "uuid": "cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff", 00:16:43.709 "strip_size_kb": 64, 00:16:43.709 "state": "configuring", 00:16:43.709 "raid_level": "concat", 00:16:43.709 "superblock": true, 00:16:43.709 "num_base_bdevs": 4, 00:16:43.709 "num_base_bdevs_discovered": 3, 00:16:43.709 "num_base_bdevs_operational": 4, 00:16:43.709 "base_bdevs_list": [ 00:16:43.709 { 00:16:43.709 "name": "BaseBdev1", 00:16:43.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.709 "is_configured": false, 00:16:43.709 "data_offset": 0, 00:16:43.709 "data_size": 0 00:16:43.709 }, 00:16:43.709 { 00:16:43.709 "name": "BaseBdev2", 00:16:43.709 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:43.709 "is_configured": true, 00:16:43.709 "data_offset": 2048, 00:16:43.709 "data_size": 63488 00:16:43.709 }, 00:16:43.709 { 00:16:43.709 "name": "BaseBdev3", 00:16:43.709 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:43.709 "is_configured": true, 00:16:43.709 "data_offset": 2048, 00:16:43.709 "data_size": 63488 00:16:43.709 }, 00:16:43.709 { 00:16:43.709 "name": "BaseBdev4", 00:16:43.709 "uuid": "422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:43.709 "is_configured": true, 00:16:43.709 "data_offset": 2048, 00:16:43.709 "data_size": 63488 00:16:43.709 } 00:16:43.709 ] 00:16:43.709 }' 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.709 18:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:44.278 [2024-07-24 18:19:52.847546] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.278 18:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.537 18:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.537 "name": 
"Existed_Raid", 00:16:44.537 "uuid": "cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff", 00:16:44.537 "strip_size_kb": 64, 00:16:44.537 "state": "configuring", 00:16:44.537 "raid_level": "concat", 00:16:44.537 "superblock": true, 00:16:44.537 "num_base_bdevs": 4, 00:16:44.537 "num_base_bdevs_discovered": 2, 00:16:44.537 "num_base_bdevs_operational": 4, 00:16:44.537 "base_bdevs_list": [ 00:16:44.537 { 00:16:44.537 "name": "BaseBdev1", 00:16:44.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.537 "is_configured": false, 00:16:44.537 "data_offset": 0, 00:16:44.537 "data_size": 0 00:16:44.537 }, 00:16:44.537 { 00:16:44.537 "name": null, 00:16:44.537 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:44.537 "is_configured": false, 00:16:44.537 "data_offset": 2048, 00:16:44.537 "data_size": 63488 00:16:44.537 }, 00:16:44.537 { 00:16:44.537 "name": "BaseBdev3", 00:16:44.537 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:44.537 "is_configured": true, 00:16:44.537 "data_offset": 2048, 00:16:44.537 "data_size": 63488 00:16:44.537 }, 00:16:44.537 { 00:16:44.537 "name": "BaseBdev4", 00:16:44.537 "uuid": "422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:44.537 "is_configured": true, 00:16:44.537 "data_offset": 2048, 00:16:44.537 "data_size": 63488 00:16:44.537 } 00:16:44.537 ] 00:16:44.537 }' 00:16:44.537 18:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.537 18:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.105 18:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.105 18:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:45.105 18:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:45.105 18:19:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:45.363 [2024-07-24 18:19:53.812840] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.363 BaseBdev1 00:16:45.363 18:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:45.363 18:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:45.363 18:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:45.363 18:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:45.363 18:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:45.363 18:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:45.363 18:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.622 18:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:45.622 [ 00:16:45.622 { 00:16:45.622 "name": "BaseBdev1", 00:16:45.622 "aliases": [ 00:16:45.622 "74a74312-9433-4671-9624-2bd7981ef9ab" 00:16:45.622 ], 00:16:45.622 "product_name": "Malloc disk", 00:16:45.622 "block_size": 512, 00:16:45.622 "num_blocks": 65536, 00:16:45.622 "uuid": "74a74312-9433-4671-9624-2bd7981ef9ab", 00:16:45.622 "assigned_rate_limits": { 00:16:45.622 "rw_ios_per_sec": 0, 00:16:45.622 "rw_mbytes_per_sec": 0, 00:16:45.622 "r_mbytes_per_sec": 0, 00:16:45.622 "w_mbytes_per_sec": 0 00:16:45.622 }, 
00:16:45.622 "claimed": true, 00:16:45.622 "claim_type": "exclusive_write", 00:16:45.622 "zoned": false, 00:16:45.622 "supported_io_types": { 00:16:45.622 "read": true, 00:16:45.622 "write": true, 00:16:45.622 "unmap": true, 00:16:45.623 "flush": true, 00:16:45.623 "reset": true, 00:16:45.623 "nvme_admin": false, 00:16:45.623 "nvme_io": false, 00:16:45.623 "nvme_io_md": false, 00:16:45.623 "write_zeroes": true, 00:16:45.623 "zcopy": true, 00:16:45.623 "get_zone_info": false, 00:16:45.623 "zone_management": false, 00:16:45.623 "zone_append": false, 00:16:45.623 "compare": false, 00:16:45.623 "compare_and_write": false, 00:16:45.623 "abort": true, 00:16:45.623 "seek_hole": false, 00:16:45.623 "seek_data": false, 00:16:45.623 "copy": true, 00:16:45.623 "nvme_iov_md": false 00:16:45.623 }, 00:16:45.623 "memory_domains": [ 00:16:45.623 { 00:16:45.623 "dma_device_id": "system", 00:16:45.623 "dma_device_type": 1 00:16:45.623 }, 00:16:45.623 { 00:16:45.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.623 "dma_device_type": 2 00:16:45.623 } 00:16:45.623 ], 00:16:45.623 "driver_specific": {} 00:16:45.623 } 00:16:45.623 ] 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.623 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.882 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.882 "name": "Existed_Raid", 00:16:45.882 "uuid": "cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff", 00:16:45.882 "strip_size_kb": 64, 00:16:45.882 "state": "configuring", 00:16:45.882 "raid_level": "concat", 00:16:45.882 "superblock": true, 00:16:45.882 "num_base_bdevs": 4, 00:16:45.882 "num_base_bdevs_discovered": 3, 00:16:45.882 "num_base_bdevs_operational": 4, 00:16:45.882 "base_bdevs_list": [ 00:16:45.882 { 00:16:45.882 "name": "BaseBdev1", 00:16:45.882 "uuid": "74a74312-9433-4671-9624-2bd7981ef9ab", 00:16:45.882 "is_configured": true, 00:16:45.882 "data_offset": 2048, 00:16:45.882 "data_size": 63488 00:16:45.882 }, 00:16:45.882 { 00:16:45.882 "name": null, 00:16:45.882 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:45.882 "is_configured": false, 00:16:45.882 "data_offset": 2048, 00:16:45.882 "data_size": 63488 00:16:45.882 }, 00:16:45.882 { 00:16:45.882 "name": "BaseBdev3", 00:16:45.882 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:45.882 "is_configured": true, 00:16:45.882 "data_offset": 2048, 00:16:45.882 "data_size": 63488 00:16:45.882 }, 00:16:45.882 { 00:16:45.882 
"name": "BaseBdev4", 00:16:45.882 "uuid": "422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:45.882 "is_configured": true, 00:16:45.882 "data_offset": 2048, 00:16:45.882 "data_size": 63488 00:16:45.882 } 00:16:45.882 ] 00:16:45.882 }' 00:16:45.882 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.882 18:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.454 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.454 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:46.454 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:46.454 18:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:46.716 [2024-07-24 18:19:55.112210] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:46.716 18:19:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.716 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.974 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.974 "name": "Existed_Raid", 00:16:46.974 "uuid": "cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff", 00:16:46.974 "strip_size_kb": 64, 00:16:46.974 "state": "configuring", 00:16:46.974 "raid_level": "concat", 00:16:46.974 "superblock": true, 00:16:46.974 "num_base_bdevs": 4, 00:16:46.974 "num_base_bdevs_discovered": 2, 00:16:46.974 "num_base_bdevs_operational": 4, 00:16:46.974 "base_bdevs_list": [ 00:16:46.974 { 00:16:46.974 "name": "BaseBdev1", 00:16:46.974 "uuid": "74a74312-9433-4671-9624-2bd7981ef9ab", 00:16:46.974 "is_configured": true, 00:16:46.974 "data_offset": 2048, 00:16:46.974 "data_size": 63488 00:16:46.974 }, 00:16:46.974 { 00:16:46.974 "name": null, 00:16:46.974 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:46.974 "is_configured": false, 00:16:46.974 "data_offset": 2048, 00:16:46.974 "data_size": 63488 00:16:46.974 }, 00:16:46.974 { 00:16:46.974 "name": null, 00:16:46.974 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:46.974 "is_configured": false, 00:16:46.975 "data_offset": 2048, 00:16:46.975 "data_size": 63488 00:16:46.975 }, 00:16:46.975 { 00:16:46.975 "name": "BaseBdev4", 
00:16:46.975 "uuid": "422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:46.975 "is_configured": true, 00:16:46.975 "data_offset": 2048, 00:16:46.975 "data_size": 63488 00:16:46.975 } 00:16:46.975 ] 00:16:46.975 }' 00:16:46.975 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.975 18:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.543 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.543 18:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:47.543 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:47.543 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:47.802 [2024-07-24 18:19:56.158920] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:47.802 18:19:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.802 "name": "Existed_Raid", 00:16:47.802 "uuid": "cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff", 00:16:47.802 "strip_size_kb": 64, 00:16:47.802 "state": "configuring", 00:16:47.802 "raid_level": "concat", 00:16:47.802 "superblock": true, 00:16:47.802 "num_base_bdevs": 4, 00:16:47.802 "num_base_bdevs_discovered": 3, 00:16:47.802 "num_base_bdevs_operational": 4, 00:16:47.802 "base_bdevs_list": [ 00:16:47.802 { 00:16:47.802 "name": "BaseBdev1", 00:16:47.802 "uuid": "74a74312-9433-4671-9624-2bd7981ef9ab", 00:16:47.802 "is_configured": true, 00:16:47.802 "data_offset": 2048, 00:16:47.802 "data_size": 63488 00:16:47.802 }, 00:16:47.802 { 00:16:47.802 "name": null, 00:16:47.802 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:47.802 "is_configured": false, 00:16:47.802 "data_offset": 2048, 00:16:47.802 "data_size": 63488 00:16:47.802 }, 00:16:47.802 { 00:16:47.802 "name": "BaseBdev3", 00:16:47.802 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:47.802 "is_configured": true, 00:16:47.802 "data_offset": 2048, 00:16:47.802 "data_size": 63488 00:16:47.802 }, 00:16:47.802 { 00:16:47.802 "name": "BaseBdev4", 
00:16:47.802 "uuid": "422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:47.802 "is_configured": true, 00:16:47.802 "data_offset": 2048, 00:16:47.802 "data_size": 63488 00:16:47.802 } 00:16:47.802 ] 00:16:47.802 }' 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.802 18:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.370 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.370 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:48.629 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:48.629 18:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:48.629 [2024-07-24 18:19:57.133443] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.629 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.888 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.888 "name": "Existed_Raid", 00:16:48.888 "uuid": "cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff", 00:16:48.888 "strip_size_kb": 64, 00:16:48.888 "state": "configuring", 00:16:48.888 "raid_level": "concat", 00:16:48.888 "superblock": true, 00:16:48.888 "num_base_bdevs": 4, 00:16:48.888 "num_base_bdevs_discovered": 2, 00:16:48.888 "num_base_bdevs_operational": 4, 00:16:48.888 "base_bdevs_list": [ 00:16:48.888 { 00:16:48.888 "name": null, 00:16:48.888 "uuid": "74a74312-9433-4671-9624-2bd7981ef9ab", 00:16:48.888 "is_configured": false, 00:16:48.888 "data_offset": 2048, 00:16:48.888 "data_size": 63488 00:16:48.888 }, 00:16:48.888 { 00:16:48.888 "name": null, 00:16:48.888 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:48.888 "is_configured": false, 00:16:48.888 "data_offset": 2048, 00:16:48.888 "data_size": 63488 00:16:48.888 }, 00:16:48.888 { 00:16:48.888 "name": "BaseBdev3", 00:16:48.888 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:48.888 "is_configured": true, 00:16:48.888 "data_offset": 2048, 00:16:48.889 "data_size": 63488 00:16:48.889 }, 00:16:48.889 { 00:16:48.889 "name": "BaseBdev4", 00:16:48.889 "uuid": 
"422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:48.889 "is_configured": true, 00:16:48.889 "data_offset": 2048, 00:16:48.889 "data_size": 63488 00:16:48.889 } 00:16:48.889 ] 00:16:48.889 }' 00:16:48.889 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.889 18:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.457 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:49.458 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.458 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:49.458 18:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:49.717 [2024-07-24 18:19:58.125546] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:49.717 18:19:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.717 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.976 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.976 "name": "Existed_Raid", 00:16:49.976 "uuid": "cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff", 00:16:49.976 "strip_size_kb": 64, 00:16:49.976 "state": "configuring", 00:16:49.976 "raid_level": "concat", 00:16:49.976 "superblock": true, 00:16:49.976 "num_base_bdevs": 4, 00:16:49.976 "num_base_bdevs_discovered": 3, 00:16:49.976 "num_base_bdevs_operational": 4, 00:16:49.976 "base_bdevs_list": [ 00:16:49.976 { 00:16:49.976 "name": null, 00:16:49.976 "uuid": "74a74312-9433-4671-9624-2bd7981ef9ab", 00:16:49.976 "is_configured": false, 00:16:49.976 "data_offset": 2048, 00:16:49.976 "data_size": 63488 00:16:49.976 }, 00:16:49.976 { 00:16:49.976 "name": "BaseBdev2", 00:16:49.976 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:49.976 "is_configured": true, 00:16:49.976 "data_offset": 2048, 00:16:49.976 "data_size": 63488 00:16:49.976 }, 00:16:49.976 { 00:16:49.976 "name": "BaseBdev3", 00:16:49.976 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:49.976 "is_configured": true, 00:16:49.976 "data_offset": 2048, 00:16:49.976 "data_size": 63488 00:16:49.976 }, 00:16:49.976 { 00:16:49.976 "name": "BaseBdev4", 
00:16:49.976 "uuid": "422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:49.976 "is_configured": true, 00:16:49.976 "data_offset": 2048, 00:16:49.976 "data_size": 63488 00:16:49.976 } 00:16:49.976 ] 00:16:49.976 }' 00:16:49.976 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.976 18:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.235 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:50.235 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.495 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:50.495 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:50.495 18:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.754 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 74a74312-9433-4671-9624-2bd7981ef9ab 00:16:50.754 [2024-07-24 18:19:59.327413] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:50.754 [2024-07-24 18:19:59.327538] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xafacc0 00:16:50.754 [2024-07-24 18:19:59.327548] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:50.754 [2024-07-24 18:19:59.327673] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa0e0d0 00:16:50.754 [2024-07-24 18:19:59.327751] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xafacc0 00:16:50.754 [2024-07-24 18:19:59.327758] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xafacc0 00:16:50.754 [2024-07-24 18:19:59.327819] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.754 NewBaseBdev 00:16:50.754 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:50.754 18:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:50.754 18:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:50.754 18:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:50.754 18:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:50.754 18:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:50.754 18:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:51.014 18:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:51.273 [ 00:16:51.273 { 00:16:51.273 "name": "NewBaseBdev", 00:16:51.273 "aliases": [ 00:16:51.273 "74a74312-9433-4671-9624-2bd7981ef9ab" 00:16:51.273 ], 00:16:51.273 "product_name": "Malloc disk", 00:16:51.273 "block_size": 512, 00:16:51.273 "num_blocks": 65536, 00:16:51.273 "uuid": "74a74312-9433-4671-9624-2bd7981ef9ab", 00:16:51.273 "assigned_rate_limits": { 00:16:51.273 "rw_ios_per_sec": 0, 00:16:51.273 "rw_mbytes_per_sec": 0, 00:16:51.273 "r_mbytes_per_sec": 0, 00:16:51.273 
"w_mbytes_per_sec": 0 00:16:51.273 }, 00:16:51.273 "claimed": true, 00:16:51.273 "claim_type": "exclusive_write", 00:16:51.273 "zoned": false, 00:16:51.273 "supported_io_types": { 00:16:51.273 "read": true, 00:16:51.273 "write": true, 00:16:51.273 "unmap": true, 00:16:51.273 "flush": true, 00:16:51.273 "reset": true, 00:16:51.273 "nvme_admin": false, 00:16:51.273 "nvme_io": false, 00:16:51.273 "nvme_io_md": false, 00:16:51.273 "write_zeroes": true, 00:16:51.273 "zcopy": true, 00:16:51.273 "get_zone_info": false, 00:16:51.273 "zone_management": false, 00:16:51.273 "zone_append": false, 00:16:51.273 "compare": false, 00:16:51.273 "compare_and_write": false, 00:16:51.273 "abort": true, 00:16:51.273 "seek_hole": false, 00:16:51.273 "seek_data": false, 00:16:51.273 "copy": true, 00:16:51.273 "nvme_iov_md": false 00:16:51.273 }, 00:16:51.273 "memory_domains": [ 00:16:51.273 { 00:16:51.273 "dma_device_id": "system", 00:16:51.273 "dma_device_type": 1 00:16:51.273 }, 00:16:51.273 { 00:16:51.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.273 "dma_device_type": 2 00:16:51.273 } 00:16:51.273 ], 00:16:51.273 "driver_specific": {} 00:16:51.273 } 00:16:51.273 ] 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.273 "name": "Existed_Raid", 00:16:51.273 "uuid": "cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff", 00:16:51.273 "strip_size_kb": 64, 00:16:51.273 "state": "online", 00:16:51.273 "raid_level": "concat", 00:16:51.273 "superblock": true, 00:16:51.273 "num_base_bdevs": 4, 00:16:51.273 "num_base_bdevs_discovered": 4, 00:16:51.273 "num_base_bdevs_operational": 4, 00:16:51.273 "base_bdevs_list": [ 00:16:51.273 { 00:16:51.273 "name": "NewBaseBdev", 00:16:51.273 "uuid": "74a74312-9433-4671-9624-2bd7981ef9ab", 00:16:51.273 "is_configured": true, 00:16:51.273 "data_offset": 2048, 00:16:51.273 "data_size": 63488 00:16:51.273 }, 00:16:51.273 { 00:16:51.273 "name": "BaseBdev2", 00:16:51.273 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:51.273 "is_configured": true, 00:16:51.273 "data_offset": 2048, 00:16:51.273 "data_size": 63488 00:16:51.273 }, 00:16:51.273 { 00:16:51.273 "name": "BaseBdev3", 00:16:51.273 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:51.273 "is_configured": true, 00:16:51.273 "data_offset": 2048, 00:16:51.273 "data_size": 63488 00:16:51.273 }, 
00:16:51.273 { 00:16:51.273 "name": "BaseBdev4", 00:16:51.273 "uuid": "422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:51.273 "is_configured": true, 00:16:51.273 "data_offset": 2048, 00:16:51.273 "data_size": 63488 00:16:51.273 } 00:16:51.273 ] 00:16:51.273 }' 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.273 18:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.841 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:51.841 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:51.841 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:51.841 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:51.841 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:51.841 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:51.841 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:51.841 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:52.100 [2024-07-24 18:20:00.498673] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.100 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:52.100 "name": "Existed_Raid", 00:16:52.100 "aliases": [ 00:16:52.100 "cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff" 00:16:52.100 ], 00:16:52.100 "product_name": "Raid Volume", 00:16:52.100 "block_size": 512, 00:16:52.100 "num_blocks": 253952, 00:16:52.100 "uuid": "cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff", 
00:16:52.100 "assigned_rate_limits": { 00:16:52.100 "rw_ios_per_sec": 0, 00:16:52.100 "rw_mbytes_per_sec": 0, 00:16:52.100 "r_mbytes_per_sec": 0, 00:16:52.100 "w_mbytes_per_sec": 0 00:16:52.100 }, 00:16:52.100 "claimed": false, 00:16:52.100 "zoned": false, 00:16:52.100 "supported_io_types": { 00:16:52.100 "read": true, 00:16:52.100 "write": true, 00:16:52.100 "unmap": true, 00:16:52.100 "flush": true, 00:16:52.100 "reset": true, 00:16:52.100 "nvme_admin": false, 00:16:52.100 "nvme_io": false, 00:16:52.100 "nvme_io_md": false, 00:16:52.100 "write_zeroes": true, 00:16:52.100 "zcopy": false, 00:16:52.100 "get_zone_info": false, 00:16:52.100 "zone_management": false, 00:16:52.100 "zone_append": false, 00:16:52.100 "compare": false, 00:16:52.100 "compare_and_write": false, 00:16:52.100 "abort": false, 00:16:52.100 "seek_hole": false, 00:16:52.100 "seek_data": false, 00:16:52.100 "copy": false, 00:16:52.100 "nvme_iov_md": false 00:16:52.100 }, 00:16:52.100 "memory_domains": [ 00:16:52.100 { 00:16:52.100 "dma_device_id": "system", 00:16:52.100 "dma_device_type": 1 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.100 "dma_device_type": 2 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "dma_device_id": "system", 00:16:52.100 "dma_device_type": 1 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.100 "dma_device_type": 2 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "dma_device_id": "system", 00:16:52.100 "dma_device_type": 1 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.100 "dma_device_type": 2 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "dma_device_id": "system", 00:16:52.100 "dma_device_type": 1 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.100 "dma_device_type": 2 00:16:52.100 } 00:16:52.100 ], 00:16:52.100 "driver_specific": { 00:16:52.100 "raid": { 00:16:52.100 "uuid": 
"cf15a35c-8f82-4d52-8ad4-dd4825fbc4ff", 00:16:52.100 "strip_size_kb": 64, 00:16:52.100 "state": "online", 00:16:52.100 "raid_level": "concat", 00:16:52.100 "superblock": true, 00:16:52.100 "num_base_bdevs": 4, 00:16:52.100 "num_base_bdevs_discovered": 4, 00:16:52.100 "num_base_bdevs_operational": 4, 00:16:52.100 "base_bdevs_list": [ 00:16:52.100 { 00:16:52.100 "name": "NewBaseBdev", 00:16:52.100 "uuid": "74a74312-9433-4671-9624-2bd7981ef9ab", 00:16:52.100 "is_configured": true, 00:16:52.100 "data_offset": 2048, 00:16:52.100 "data_size": 63488 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "name": "BaseBdev2", 00:16:52.100 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:52.100 "is_configured": true, 00:16:52.100 "data_offset": 2048, 00:16:52.100 "data_size": 63488 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "name": "BaseBdev3", 00:16:52.100 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:52.100 "is_configured": true, 00:16:52.100 "data_offset": 2048, 00:16:52.100 "data_size": 63488 00:16:52.100 }, 00:16:52.100 { 00:16:52.100 "name": "BaseBdev4", 00:16:52.100 "uuid": "422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:52.100 "is_configured": true, 00:16:52.100 "data_offset": 2048, 00:16:52.100 "data_size": 63488 00:16:52.100 } 00:16:52.100 ] 00:16:52.100 } 00:16:52.100 } 00:16:52.100 }' 00:16:52.100 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.101 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:52.101 BaseBdev2 00:16:52.101 BaseBdev3 00:16:52.101 BaseBdev4' 00:16:52.101 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:52.101 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:16:52.101 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.360 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.360 "name": "NewBaseBdev", 00:16:52.360 "aliases": [ 00:16:52.360 "74a74312-9433-4671-9624-2bd7981ef9ab" 00:16:52.360 ], 00:16:52.360 "product_name": "Malloc disk", 00:16:52.360 "block_size": 512, 00:16:52.360 "num_blocks": 65536, 00:16:52.360 "uuid": "74a74312-9433-4671-9624-2bd7981ef9ab", 00:16:52.360 "assigned_rate_limits": { 00:16:52.360 "rw_ios_per_sec": 0, 00:16:52.360 "rw_mbytes_per_sec": 0, 00:16:52.360 "r_mbytes_per_sec": 0, 00:16:52.360 "w_mbytes_per_sec": 0 00:16:52.360 }, 00:16:52.360 "claimed": true, 00:16:52.360 "claim_type": "exclusive_write", 00:16:52.360 "zoned": false, 00:16:52.360 "supported_io_types": { 00:16:52.360 "read": true, 00:16:52.360 "write": true, 00:16:52.360 "unmap": true, 00:16:52.360 "flush": true, 00:16:52.360 "reset": true, 00:16:52.360 "nvme_admin": false, 00:16:52.360 "nvme_io": false, 00:16:52.360 "nvme_io_md": false, 00:16:52.360 "write_zeroes": true, 00:16:52.360 "zcopy": true, 00:16:52.360 "get_zone_info": false, 00:16:52.360 "zone_management": false, 00:16:52.360 "zone_append": false, 00:16:52.360 "compare": false, 00:16:52.360 "compare_and_write": false, 00:16:52.360 "abort": true, 00:16:52.360 "seek_hole": false, 00:16:52.360 "seek_data": false, 00:16:52.360 "copy": true, 00:16:52.360 "nvme_iov_md": false 00:16:52.360 }, 00:16:52.360 "memory_domains": [ 00:16:52.360 { 00:16:52.360 "dma_device_id": "system", 00:16:52.360 "dma_device_type": 1 00:16:52.360 }, 00:16:52.360 { 00:16:52.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.360 "dma_device_type": 2 00:16:52.360 } 00:16:52.360 ], 00:16:52.360 "driver_specific": {} 00:16:52.360 }' 00:16:52.360 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.360 18:20:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.360 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.360 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.360 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.360 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.360 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.360 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.620 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.620 18:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.620 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.620 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.620 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:52.620 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:52.620 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.878 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.878 "name": "BaseBdev2", 00:16:52.878 "aliases": [ 00:16:52.878 "8a573068-6c12-4d62-91f4-00a85e71c8f3" 00:16:52.878 ], 00:16:52.878 "product_name": "Malloc disk", 00:16:52.878 "block_size": 512, 00:16:52.878 "num_blocks": 65536, 00:16:52.878 "uuid": "8a573068-6c12-4d62-91f4-00a85e71c8f3", 00:16:52.878 
"assigned_rate_limits": { 00:16:52.878 "rw_ios_per_sec": 0, 00:16:52.878 "rw_mbytes_per_sec": 0, 00:16:52.878 "r_mbytes_per_sec": 0, 00:16:52.878 "w_mbytes_per_sec": 0 00:16:52.878 }, 00:16:52.878 "claimed": true, 00:16:52.878 "claim_type": "exclusive_write", 00:16:52.878 "zoned": false, 00:16:52.878 "supported_io_types": { 00:16:52.878 "read": true, 00:16:52.878 "write": true, 00:16:52.878 "unmap": true, 00:16:52.878 "flush": true, 00:16:52.878 "reset": true, 00:16:52.878 "nvme_admin": false, 00:16:52.878 "nvme_io": false, 00:16:52.879 "nvme_io_md": false, 00:16:52.879 "write_zeroes": true, 00:16:52.879 "zcopy": true, 00:16:52.879 "get_zone_info": false, 00:16:52.879 "zone_management": false, 00:16:52.879 "zone_append": false, 00:16:52.879 "compare": false, 00:16:52.879 "compare_and_write": false, 00:16:52.879 "abort": true, 00:16:52.879 "seek_hole": false, 00:16:52.879 "seek_data": false, 00:16:52.879 "copy": true, 00:16:52.879 "nvme_iov_md": false 00:16:52.879 }, 00:16:52.879 "memory_domains": [ 00:16:52.879 { 00:16:52.879 "dma_device_id": "system", 00:16:52.879 "dma_device_type": 1 00:16:52.879 }, 00:16:52.879 { 00:16:52.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.879 "dma_device_type": 2 00:16:52.879 } 00:16:52.879 ], 00:16:52.879 "driver_specific": {} 00:16:52.879 }' 00:16:52.879 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.879 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.879 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.879 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.879 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.879 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.879 18:20:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.879 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.879 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.879 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.138 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.138 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:53.138 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.138 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:53.138 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.138 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.138 "name": "BaseBdev3", 00:16:53.138 "aliases": [ 00:16:53.138 "6c5ff594-724a-49c2-a56a-03166cb648a8" 00:16:53.138 ], 00:16:53.138 "product_name": "Malloc disk", 00:16:53.138 "block_size": 512, 00:16:53.138 "num_blocks": 65536, 00:16:53.138 "uuid": "6c5ff594-724a-49c2-a56a-03166cb648a8", 00:16:53.138 "assigned_rate_limits": { 00:16:53.138 "rw_ios_per_sec": 0, 00:16:53.138 "rw_mbytes_per_sec": 0, 00:16:53.138 "r_mbytes_per_sec": 0, 00:16:53.138 "w_mbytes_per_sec": 0 00:16:53.138 }, 00:16:53.138 "claimed": true, 00:16:53.138 "claim_type": "exclusive_write", 00:16:53.138 "zoned": false, 00:16:53.138 "supported_io_types": { 00:16:53.138 "read": true, 00:16:53.138 "write": true, 00:16:53.138 "unmap": true, 00:16:53.138 "flush": true, 00:16:53.138 "reset": true, 00:16:53.138 "nvme_admin": false, 00:16:53.138 "nvme_io": false, 00:16:53.138 "nvme_io_md": false, 00:16:53.138 
"write_zeroes": true, 00:16:53.138 "zcopy": true, 00:16:53.138 "get_zone_info": false, 00:16:53.138 "zone_management": false, 00:16:53.138 "zone_append": false, 00:16:53.138 "compare": false, 00:16:53.138 "compare_and_write": false, 00:16:53.138 "abort": true, 00:16:53.138 "seek_hole": false, 00:16:53.138 "seek_data": false, 00:16:53.138 "copy": true, 00:16:53.138 "nvme_iov_md": false 00:16:53.138 }, 00:16:53.138 "memory_domains": [ 00:16:53.138 { 00:16:53.138 "dma_device_id": "system", 00:16:53.138 "dma_device_type": 1 00:16:53.138 }, 00:16:53.138 { 00:16:53.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.138 "dma_device_type": 2 00:16:53.138 } 00:16:53.138 ], 00:16:53.138 "driver_specific": {} 00:16:53.138 }' 00:16:53.138 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.398 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.398 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:53.398 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.398 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.398 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:53.398 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.398 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.398 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.398 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.398 18:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.657 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:16:53.657 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.657 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:53.657 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.657 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.657 "name": "BaseBdev4", 00:16:53.657 "aliases": [ 00:16:53.657 "422e87ae-35f6-4226-a3e4-2c548406c03a" 00:16:53.657 ], 00:16:53.657 "product_name": "Malloc disk", 00:16:53.657 "block_size": 512, 00:16:53.657 "num_blocks": 65536, 00:16:53.657 "uuid": "422e87ae-35f6-4226-a3e4-2c548406c03a", 00:16:53.657 "assigned_rate_limits": { 00:16:53.657 "rw_ios_per_sec": 0, 00:16:53.657 "rw_mbytes_per_sec": 0, 00:16:53.657 "r_mbytes_per_sec": 0, 00:16:53.657 "w_mbytes_per_sec": 0 00:16:53.657 }, 00:16:53.657 "claimed": true, 00:16:53.657 "claim_type": "exclusive_write", 00:16:53.657 "zoned": false, 00:16:53.657 "supported_io_types": { 00:16:53.657 "read": true, 00:16:53.657 "write": true, 00:16:53.657 "unmap": true, 00:16:53.657 "flush": true, 00:16:53.657 "reset": true, 00:16:53.657 "nvme_admin": false, 00:16:53.657 "nvme_io": false, 00:16:53.657 "nvme_io_md": false, 00:16:53.657 "write_zeroes": true, 00:16:53.657 "zcopy": true, 00:16:53.657 "get_zone_info": false, 00:16:53.657 "zone_management": false, 00:16:53.657 "zone_append": false, 00:16:53.657 "compare": false, 00:16:53.657 "compare_and_write": false, 00:16:53.657 "abort": true, 00:16:53.657 "seek_hole": false, 00:16:53.657 "seek_data": false, 00:16:53.657 "copy": true, 00:16:53.657 "nvme_iov_md": false 00:16:53.657 }, 00:16:53.657 "memory_domains": [ 00:16:53.657 { 00:16:53.657 "dma_device_id": "system", 00:16:53.657 "dma_device_type": 1 00:16:53.657 }, 00:16:53.657 { 00:16:53.657 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.657 "dma_device_type": 2 00:16:53.657 } 00:16:53.657 ], 00:16:53.657 "driver_specific": {} 00:16:53.657 }' 00:16:53.657 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.657 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:53.917 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.176 [2024-07-24 18:20:02.644027] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.176 [2024-07-24 18:20:02.644045] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.176 [2024-07-24 18:20:02.644083] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:16:54.176 [2024-07-24 18:20:02.644127] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.177 [2024-07-24 18:20:02.644135] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xafacc0 name Existed_Raid, state offline 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2231525 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2231525 ']' 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2231525 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2231525 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2231525' 00:16:54.177 killing process with pid 2231525 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2231525 00:16:54.177 [2024-07-24 18:20:02.712825] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.177 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2231525 00:16:54.177 [2024-07-24 18:20:02.742146] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.436 18:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:54.436 
00:16:54.436 real 0m23.757s 00:16:54.436 user 0m43.371s 00:16:54.436 sys 0m4.537s 00:16:54.436 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:54.436 18:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.436 ************************************ 00:16:54.436 END TEST raid_state_function_test_sb 00:16:54.436 ************************************ 00:16:54.436 18:20:02 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:54.436 18:20:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:54.436 18:20:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:54.436 18:20:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.436 ************************************ 00:16:54.436 START TEST raid_superblock_test 00:16:54.436 ************************************ 00:16:54.436 18:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:16:54.436 18:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:16:54.436 18:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:16:54.436 18:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2236213 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2236213 /var/tmp/spdk-raid.sock 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2236213 ']' 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:54.436 18:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.696 [2024-07-24 18:20:03.054351] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:16:54.696 [2024-07-24 18:20:03.054396] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236213 ] 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:01.0 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:01.1 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:01.2 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:01.3 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:01.4 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:01.5 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:01.6 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:01.7 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:02.0 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested 
device 0000:b3:02.1 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:02.2 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:02.3 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:02.4 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:02.5 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:02.6 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b3:02.7 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:01.0 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:01.1 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:01.2 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:01.3 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:01.4 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:01.5 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:01.6 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:01.7 
cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:02.0 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:02.1 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:02.2 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:02.3 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:02.4 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:02.5 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:02.6 cannot be used 00:16:54.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.696 EAL: Requested device 0000:b5:02.7 cannot be used 00:16:54.696 [2024-07-24 18:20:03.146443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.696 [2024-07-24 18:20:03.220010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.696 [2024-07-24 18:20:03.275225] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.696 [2024-07-24 18:20:03.275253] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:55.273 18:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:55.532 malloc1 00:16:55.532 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:55.798 [2024-07-24 18:20:04.167680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:55.798 [2024-07-24 18:20:04.167715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.798 [2024-07-24 18:20:04.167729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1958cb0 00:16:55.798 [2024-07-24 18:20:04.167737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.798 [2024-07-24 18:20:04.168828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.798 [2024-07-24 18:20:04.168851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:55.798 pt1 00:16:55.798 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( 
i++ )) 00:16:55.798 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:55.798 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:55.799 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:55.799 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:55.799 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:55.799 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:55.799 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:55.799 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:55.799 malloc2 00:16:55.799 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.068 [2024-07-24 18:20:04.516166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.068 [2024-07-24 18:20:04.516199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.068 [2024-07-24 18:20:04.516210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195a0b0 00:16:56.068 [2024-07-24 18:20:04.516219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.068 [2024-07-24 18:20:04.517337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.068 [2024-07-24 18:20:04.517360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:16:56.068 pt2 00:16:56.068 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:56.068 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:56.068 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:56.068 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:16:56.068 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:56.068 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.068 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.068 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.068 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:56.327 malloc3 00:16:56.327 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:56.327 [2024-07-24 18:20:04.860570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:56.327 [2024-07-24 18:20:04.860606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.327 [2024-07-24 18:20:04.860617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1af0a80 00:16:56.327 [2024-07-24 18:20:04.860630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.327 [2024-07-24 18:20:04.861659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:56.327 [2024-07-24 18:20:04.861682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:56.327 pt3 00:16:56.327 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:56.327 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:56.327 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:16:56.327 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:16:56.327 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:56.327 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.327 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.327 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.327 18:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:56.586 malloc4 00:16:56.586 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:56.845 [2024-07-24 18:20:05.209051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:56.845 [2024-07-24 18:20:05.209084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.846 [2024-07-24 18:20:05.209097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1af33a0 00:16:56.846 [2024-07-24 18:20:05.209105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:16:56.846 [2024-07-24 18:20:05.210106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.846 [2024-07-24 18:20:05.210129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:56.846 pt4 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:56.846 [2024-07-24 18:20:05.381515] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:56.846 [2024-07-24 18:20:05.382342] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:56.846 [2024-07-24 18:20:05.382379] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:56.846 [2024-07-24 18:20:05.382405] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:56.846 [2024-07-24 18:20:05.382513] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1950c70 00:16:56.846 [2024-07-24 18:20:05.382520] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:56.846 [2024-07-24 18:20:05.382655] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x194eeb0 00:16:56.846 [2024-07-24 18:20:05.382749] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1950c70 00:16:56.846 [2024-07-24 18:20:05.382755] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1950c70 00:16:56.846 [2024-07-24 18:20:05.382814] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.846 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.104 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.104 "name": "raid_bdev1", 00:16:57.104 "uuid": "62576cf0-5838-497c-b15c-257655a67c0b", 00:16:57.104 "strip_size_kb": 64, 00:16:57.104 "state": "online", 00:16:57.104 "raid_level": "concat", 00:16:57.104 "superblock": true, 00:16:57.104 "num_base_bdevs": 4, 00:16:57.105 "num_base_bdevs_discovered": 4, 00:16:57.105 "num_base_bdevs_operational": 4, 00:16:57.105 "base_bdevs_list": [ 00:16:57.105 { 00:16:57.105 "name": "pt1", 00:16:57.105 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.105 
"is_configured": true, 00:16:57.105 "data_offset": 2048, 00:16:57.105 "data_size": 63488 00:16:57.105 }, 00:16:57.105 { 00:16:57.105 "name": "pt2", 00:16:57.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.105 "is_configured": true, 00:16:57.105 "data_offset": 2048, 00:16:57.105 "data_size": 63488 00:16:57.105 }, 00:16:57.105 { 00:16:57.105 "name": "pt3", 00:16:57.105 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.105 "is_configured": true, 00:16:57.105 "data_offset": 2048, 00:16:57.105 "data_size": 63488 00:16:57.105 }, 00:16:57.105 { 00:16:57.105 "name": "pt4", 00:16:57.105 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.105 "is_configured": true, 00:16:57.105 "data_offset": 2048, 00:16:57.105 "data_size": 63488 00:16:57.105 } 00:16:57.105 ] 00:16:57.105 }' 00:16:57.105 18:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.105 18:20:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:57.673 [2024-07-24 18:20:06.179733] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:57.673 "name": "raid_bdev1", 00:16:57.673 "aliases": [ 00:16:57.673 "62576cf0-5838-497c-b15c-257655a67c0b" 00:16:57.673 ], 00:16:57.673 "product_name": "Raid Volume", 00:16:57.673 "block_size": 512, 00:16:57.673 "num_blocks": 253952, 00:16:57.673 "uuid": "62576cf0-5838-497c-b15c-257655a67c0b", 00:16:57.673 "assigned_rate_limits": { 00:16:57.673 "rw_ios_per_sec": 0, 00:16:57.673 "rw_mbytes_per_sec": 0, 00:16:57.673 "r_mbytes_per_sec": 0, 00:16:57.673 "w_mbytes_per_sec": 0 00:16:57.673 }, 00:16:57.673 "claimed": false, 00:16:57.673 "zoned": false, 00:16:57.673 "supported_io_types": { 00:16:57.673 "read": true, 00:16:57.673 "write": true, 00:16:57.673 "unmap": true, 00:16:57.673 "flush": true, 00:16:57.673 "reset": true, 00:16:57.673 "nvme_admin": false, 00:16:57.673 "nvme_io": false, 00:16:57.673 "nvme_io_md": false, 00:16:57.673 "write_zeroes": true, 00:16:57.673 "zcopy": false, 00:16:57.673 "get_zone_info": false, 00:16:57.673 "zone_management": false, 00:16:57.673 "zone_append": false, 00:16:57.673 "compare": false, 00:16:57.673 "compare_and_write": false, 00:16:57.673 "abort": false, 00:16:57.673 "seek_hole": false, 00:16:57.673 "seek_data": false, 00:16:57.673 "copy": false, 00:16:57.673 "nvme_iov_md": false 00:16:57.673 }, 00:16:57.673 "memory_domains": [ 00:16:57.673 { 00:16:57.673 "dma_device_id": "system", 00:16:57.673 "dma_device_type": 1 00:16:57.673 }, 00:16:57.673 { 00:16:57.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.673 "dma_device_type": 2 00:16:57.673 }, 00:16:57.673 { 00:16:57.673 "dma_device_id": "system", 00:16:57.673 "dma_device_type": 1 00:16:57.673 }, 00:16:57.673 { 00:16:57.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.673 "dma_device_type": 2 00:16:57.673 }, 00:16:57.673 { 00:16:57.673 "dma_device_id": "system", 00:16:57.673 
"dma_device_type": 1 00:16:57.673 }, 00:16:57.673 { 00:16:57.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.673 "dma_device_type": 2 00:16:57.673 }, 00:16:57.673 { 00:16:57.673 "dma_device_id": "system", 00:16:57.673 "dma_device_type": 1 00:16:57.673 }, 00:16:57.673 { 00:16:57.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.673 "dma_device_type": 2 00:16:57.673 } 00:16:57.673 ], 00:16:57.673 "driver_specific": { 00:16:57.673 "raid": { 00:16:57.673 "uuid": "62576cf0-5838-497c-b15c-257655a67c0b", 00:16:57.673 "strip_size_kb": 64, 00:16:57.673 "state": "online", 00:16:57.673 "raid_level": "concat", 00:16:57.673 "superblock": true, 00:16:57.673 "num_base_bdevs": 4, 00:16:57.673 "num_base_bdevs_discovered": 4, 00:16:57.673 "num_base_bdevs_operational": 4, 00:16:57.673 "base_bdevs_list": [ 00:16:57.673 { 00:16:57.673 "name": "pt1", 00:16:57.673 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.673 "is_configured": true, 00:16:57.673 "data_offset": 2048, 00:16:57.673 "data_size": 63488 00:16:57.673 }, 00:16:57.673 { 00:16:57.673 "name": "pt2", 00:16:57.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.673 "is_configured": true, 00:16:57.673 "data_offset": 2048, 00:16:57.673 "data_size": 63488 00:16:57.673 }, 00:16:57.673 { 00:16:57.673 "name": "pt3", 00:16:57.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:57.673 "is_configured": true, 00:16:57.673 "data_offset": 2048, 00:16:57.673 "data_size": 63488 00:16:57.673 }, 00:16:57.673 { 00:16:57.673 "name": "pt4", 00:16:57.673 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:57.673 "is_configured": true, 00:16:57.673 "data_offset": 2048, 00:16:57.673 "data_size": 63488 00:16:57.673 } 00:16:57.673 ] 00:16:57.673 } 00:16:57.673 } 00:16:57.673 }' 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:57.673 pt2 00:16:57.673 pt3 00:16:57.673 pt4' 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:57.673 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:57.932 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:57.932 "name": "pt1", 00:16:57.932 "aliases": [ 00:16:57.932 "00000000-0000-0000-0000-000000000001" 00:16:57.932 ], 00:16:57.932 "product_name": "passthru", 00:16:57.932 "block_size": 512, 00:16:57.932 "num_blocks": 65536, 00:16:57.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.932 "assigned_rate_limits": { 00:16:57.932 "rw_ios_per_sec": 0, 00:16:57.932 "rw_mbytes_per_sec": 0, 00:16:57.932 "r_mbytes_per_sec": 0, 00:16:57.932 "w_mbytes_per_sec": 0 00:16:57.932 }, 00:16:57.932 "claimed": true, 00:16:57.932 "claim_type": "exclusive_write", 00:16:57.932 "zoned": false, 00:16:57.932 "supported_io_types": { 00:16:57.932 "read": true, 00:16:57.932 "write": true, 00:16:57.932 "unmap": true, 00:16:57.932 "flush": true, 00:16:57.932 "reset": true, 00:16:57.932 "nvme_admin": false, 00:16:57.932 "nvme_io": false, 00:16:57.932 "nvme_io_md": false, 00:16:57.932 "write_zeroes": true, 00:16:57.932 "zcopy": true, 00:16:57.932 "get_zone_info": false, 00:16:57.932 "zone_management": false, 00:16:57.932 "zone_append": false, 00:16:57.933 "compare": false, 00:16:57.933 "compare_and_write": false, 00:16:57.933 "abort": true, 00:16:57.933 "seek_hole": false, 00:16:57.933 "seek_data": false, 00:16:57.933 "copy": true, 00:16:57.933 "nvme_iov_md": false 00:16:57.933 }, 00:16:57.933 "memory_domains": [ 00:16:57.933 { 00:16:57.933 "dma_device_id": "system", 00:16:57.933 
"dma_device_type": 1 00:16:57.933 }, 00:16:57.933 { 00:16:57.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.933 "dma_device_type": 2 00:16:57.933 } 00:16:57.933 ], 00:16:57.933 "driver_specific": { 00:16:57.933 "passthru": { 00:16:57.933 "name": "pt1", 00:16:57.933 "base_bdev_name": "malloc1" 00:16:57.933 } 00:16:57.933 } 00:16:57.933 }' 00:16:57.933 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:57.933 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:57.933 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:57.933 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.933 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.192 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.192 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.192 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.192 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:58.192 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.192 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.192 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:58.192 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:58.192 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:58.192 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:58.451 18:20:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:58.451 "name": "pt2", 00:16:58.451 "aliases": [ 00:16:58.451 "00000000-0000-0000-0000-000000000002" 00:16:58.451 ], 00:16:58.451 "product_name": "passthru", 00:16:58.451 "block_size": 512, 00:16:58.451 "num_blocks": 65536, 00:16:58.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.451 "assigned_rate_limits": { 00:16:58.451 "rw_ios_per_sec": 0, 00:16:58.451 "rw_mbytes_per_sec": 0, 00:16:58.451 "r_mbytes_per_sec": 0, 00:16:58.451 "w_mbytes_per_sec": 0 00:16:58.451 }, 00:16:58.451 "claimed": true, 00:16:58.451 "claim_type": "exclusive_write", 00:16:58.451 "zoned": false, 00:16:58.451 "supported_io_types": { 00:16:58.451 "read": true, 00:16:58.451 "write": true, 00:16:58.451 "unmap": true, 00:16:58.451 "flush": true, 00:16:58.451 "reset": true, 00:16:58.451 "nvme_admin": false, 00:16:58.451 "nvme_io": false, 00:16:58.451 "nvme_io_md": false, 00:16:58.451 "write_zeroes": true, 00:16:58.451 "zcopy": true, 00:16:58.451 "get_zone_info": false, 00:16:58.451 "zone_management": false, 00:16:58.451 "zone_append": false, 00:16:58.451 "compare": false, 00:16:58.451 "compare_and_write": false, 00:16:58.451 "abort": true, 00:16:58.451 "seek_hole": false, 00:16:58.451 "seek_data": false, 00:16:58.451 "copy": true, 00:16:58.451 "nvme_iov_md": false 00:16:58.451 }, 00:16:58.451 "memory_domains": [ 00:16:58.451 { 00:16:58.451 "dma_device_id": "system", 00:16:58.451 "dma_device_type": 1 00:16:58.451 }, 00:16:58.451 { 00:16:58.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.451 "dma_device_type": 2 00:16:58.451 } 00:16:58.451 ], 00:16:58.451 "driver_specific": { 00:16:58.451 "passthru": { 00:16:58.451 "name": "pt2", 00:16:58.451 "base_bdev_name": "malloc2" 00:16:58.451 } 00:16:58.451 } 00:16:58.451 }' 00:16:58.451 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.451 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.451 18:20:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.451 18:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.451 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.451 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.710 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.710 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.710 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:58.710 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.710 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:58.710 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:58.710 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:58.710 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:58.710 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:58.969 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:58.969 "name": "pt3", 00:16:58.969 "aliases": [ 00:16:58.969 "00000000-0000-0000-0000-000000000003" 00:16:58.969 ], 00:16:58.969 "product_name": "passthru", 00:16:58.969 "block_size": 512, 00:16:58.969 "num_blocks": 65536, 00:16:58.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:58.969 "assigned_rate_limits": { 00:16:58.969 "rw_ios_per_sec": 0, 00:16:58.969 "rw_mbytes_per_sec": 0, 00:16:58.969 "r_mbytes_per_sec": 0, 00:16:58.969 "w_mbytes_per_sec": 0 00:16:58.969 }, 00:16:58.969 "claimed": true, 00:16:58.969 
"claim_type": "exclusive_write", 00:16:58.969 "zoned": false, 00:16:58.969 "supported_io_types": { 00:16:58.969 "read": true, 00:16:58.969 "write": true, 00:16:58.969 "unmap": true, 00:16:58.970 "flush": true, 00:16:58.970 "reset": true, 00:16:58.970 "nvme_admin": false, 00:16:58.970 "nvme_io": false, 00:16:58.970 "nvme_io_md": false, 00:16:58.970 "write_zeroes": true, 00:16:58.970 "zcopy": true, 00:16:58.970 "get_zone_info": false, 00:16:58.970 "zone_management": false, 00:16:58.970 "zone_append": false, 00:16:58.970 "compare": false, 00:16:58.970 "compare_and_write": false, 00:16:58.970 "abort": true, 00:16:58.970 "seek_hole": false, 00:16:58.970 "seek_data": false, 00:16:58.970 "copy": true, 00:16:58.970 "nvme_iov_md": false 00:16:58.970 }, 00:16:58.970 "memory_domains": [ 00:16:58.970 { 00:16:58.970 "dma_device_id": "system", 00:16:58.970 "dma_device_type": 1 00:16:58.970 }, 00:16:58.970 { 00:16:58.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.970 "dma_device_type": 2 00:16:58.970 } 00:16:58.970 ], 00:16:58.970 "driver_specific": { 00:16:58.970 "passthru": { 00:16:58.970 "name": "pt3", 00:16:58.970 "base_bdev_name": "malloc3" 00:16:58.970 } 00:16:58.970 } 00:16:58.970 }' 00:16:58.970 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.970 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.970 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.970 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.970 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.970 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.970 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.970 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:16:59.229 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.229 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.229 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.229 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.229 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.229 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:59.229 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.229 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.229 "name": "pt4", 00:16:59.229 "aliases": [ 00:16:59.229 "00000000-0000-0000-0000-000000000004" 00:16:59.229 ], 00:16:59.229 "product_name": "passthru", 00:16:59.229 "block_size": 512, 00:16:59.229 "num_blocks": 65536, 00:16:59.229 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:59.229 "assigned_rate_limits": { 00:16:59.229 "rw_ios_per_sec": 0, 00:16:59.229 "rw_mbytes_per_sec": 0, 00:16:59.229 "r_mbytes_per_sec": 0, 00:16:59.229 "w_mbytes_per_sec": 0 00:16:59.229 }, 00:16:59.229 "claimed": true, 00:16:59.229 "claim_type": "exclusive_write", 00:16:59.229 "zoned": false, 00:16:59.229 "supported_io_types": { 00:16:59.229 "read": true, 00:16:59.229 "write": true, 00:16:59.229 "unmap": true, 00:16:59.229 "flush": true, 00:16:59.229 "reset": true, 00:16:59.229 "nvme_admin": false, 00:16:59.229 "nvme_io": false, 00:16:59.229 "nvme_io_md": false, 00:16:59.229 "write_zeroes": true, 00:16:59.229 "zcopy": true, 00:16:59.229 "get_zone_info": false, 00:16:59.229 "zone_management": false, 00:16:59.229 "zone_append": false, 00:16:59.229 "compare": false, 00:16:59.229 
"compare_and_write": false, 00:16:59.229 "abort": true, 00:16:59.229 "seek_hole": false, 00:16:59.229 "seek_data": false, 00:16:59.229 "copy": true, 00:16:59.229 "nvme_iov_md": false 00:16:59.229 }, 00:16:59.229 "memory_domains": [ 00:16:59.229 { 00:16:59.229 "dma_device_id": "system", 00:16:59.229 "dma_device_type": 1 00:16:59.229 }, 00:16:59.229 { 00:16:59.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.229 "dma_device_type": 2 00:16:59.229 } 00:16:59.229 ], 00:16:59.229 "driver_specific": { 00:16:59.229 "passthru": { 00:16:59.229 "name": "pt4", 00:16:59.229 "base_bdev_name": "malloc4" 00:16:59.229 } 00:16:59.229 } 00:16:59.229 }' 00:16:59.229 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.488 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.488 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:59.488 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.488 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.488 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.488 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.488 18:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.488 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.488 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.488 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.747 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.747 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:59.747 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:59.747 [2024-07-24 18:20:08.269127] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.747 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=62576cf0-5838-497c-b15c-257655a67c0b 00:16:59.747 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 62576cf0-5838-497c-b15c-257655a67c0b ']' 00:16:59.747 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:00.006 [2024-07-24 18:20:08.441381] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.006 [2024-07-24 18:20:08.441393] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.006 [2024-07-24 18:20:08.441431] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.006 [2024-07-24 18:20:08.441474] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.006 [2024-07-24 18:20:08.441482] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1950c70 name raid_bdev1, state offline 00:17:00.006 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.006 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:00.265 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:00.265 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:00.265 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in 
"${base_bdevs_pt[@]}" 00:17:00.265 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:00.265 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:00.265 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:00.523 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:00.523 18:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:00.523 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:00.523 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:00.783 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:00.783 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:01.042 18:20:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:01.042 [2024-07-24 18:20:09.612387] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:01.042 [2024-07-24 18:20:09.613324] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:01.042 [2024-07-24 18:20:09.613354] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:01.042 [2024-07-24 18:20:09.613374] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:01.042 [2024-07-24 18:20:09.613407] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:01.042 [2024-07-24 18:20:09.613434] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:01.042 [2024-07-24 18:20:09.613448] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:01.042 [2024-07-24 18:20:09.613461] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:01.042 [2024-07-24 18:20:09.613473] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.042 [2024-07-24 18:20:09.613479] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1afc730 name raid_bdev1, state configuring 00:17:01.042 request: 00:17:01.042 { 00:17:01.042 "name": "raid_bdev1", 00:17:01.042 "raid_level": "concat", 00:17:01.042 "base_bdevs": [ 00:17:01.042 "malloc1", 00:17:01.042 "malloc2", 00:17:01.042 "malloc3", 00:17:01.042 "malloc4" 00:17:01.042 ], 00:17:01.042 "strip_size_kb": 64, 00:17:01.042 "superblock": false, 00:17:01.042 "method": "bdev_raid_create", 00:17:01.042 "req_id": 1 00:17:01.042 } 00:17:01.042 Got JSON-RPC error response 00:17:01.042 response: 00:17:01.042 { 00:17:01.042 "code": -17, 00:17:01.042 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:01.042 } 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.042 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:01.302 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:01.302 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:01.302 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.562 [2024-07-24 18:20:09.949216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.562 [2024-07-24 18:20:09.949242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.562 [2024-07-24 18:20:09.949255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1958ee0 00:17:01.562 [2024-07-24 18:20:09.949263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.562 [2024-07-24 18:20:09.950411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.562 [2024-07-24 18:20:09.950433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.562 [2024-07-24 18:20:09.950479] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:01.562 [2024-07-24 18:20:09.950497] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.562 pt1 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 
configuring concat 64 4 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.562 18:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.562 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.562 "name": "raid_bdev1", 00:17:01.562 "uuid": "62576cf0-5838-497c-b15c-257655a67c0b", 00:17:01.562 "strip_size_kb": 64, 00:17:01.562 "state": "configuring", 00:17:01.562 "raid_level": "concat", 00:17:01.562 "superblock": true, 00:17:01.562 "num_base_bdevs": 4, 00:17:01.562 "num_base_bdevs_discovered": 1, 00:17:01.562 "num_base_bdevs_operational": 4, 00:17:01.562 "base_bdevs_list": [ 00:17:01.562 { 00:17:01.562 "name": "pt1", 00:17:01.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:01.562 "is_configured": true, 00:17:01.562 "data_offset": 2048, 
00:17:01.562 "data_size": 63488 00:17:01.562 }, 00:17:01.562 { 00:17:01.562 "name": null, 00:17:01.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.562 "is_configured": false, 00:17:01.562 "data_offset": 2048, 00:17:01.562 "data_size": 63488 00:17:01.562 }, 00:17:01.562 { 00:17:01.562 "name": null, 00:17:01.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:01.562 "is_configured": false, 00:17:01.562 "data_offset": 2048, 00:17:01.562 "data_size": 63488 00:17:01.562 }, 00:17:01.562 { 00:17:01.562 "name": null, 00:17:01.562 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:01.562 "is_configured": false, 00:17:01.562 "data_offset": 2048, 00:17:01.562 "data_size": 63488 00:17:01.562 } 00:17:01.562 ] 00:17:01.562 }' 00:17:01.562 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.562 18:20:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.131 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:17:02.131 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:02.390 [2024-07-24 18:20:10.751294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:02.390 [2024-07-24 18:20:10.751327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.390 [2024-07-24 18:20:10.751339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19505e0 00:17:02.390 [2024-07-24 18:20:10.751347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.390 [2024-07-24 18:20:10.751583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.390 [2024-07-24 18:20:10.751596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt2 00:17:02.390 [2024-07-24 18:20:10.751644] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:02.390 [2024-07-24 18:20:10.751658] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.390 pt2 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:02.390 [2024-07-24 18:20:10.919754] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.390 18:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:02.650 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.650 "name": "raid_bdev1", 00:17:02.650 "uuid": "62576cf0-5838-497c-b15c-257655a67c0b", 00:17:02.650 "strip_size_kb": 64, 00:17:02.650 "state": "configuring", 00:17:02.650 "raid_level": "concat", 00:17:02.650 "superblock": true, 00:17:02.650 "num_base_bdevs": 4, 00:17:02.650 "num_base_bdevs_discovered": 1, 00:17:02.650 "num_base_bdevs_operational": 4, 00:17:02.650 "base_bdevs_list": [ 00:17:02.650 { 00:17:02.650 "name": "pt1", 00:17:02.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.650 "is_configured": true, 00:17:02.650 "data_offset": 2048, 00:17:02.650 "data_size": 63488 00:17:02.650 }, 00:17:02.650 { 00:17:02.650 "name": null, 00:17:02.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.650 "is_configured": false, 00:17:02.650 "data_offset": 2048, 00:17:02.650 "data_size": 63488 00:17:02.650 }, 00:17:02.650 { 00:17:02.650 "name": null, 00:17:02.650 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:02.650 "is_configured": false, 00:17:02.650 "data_offset": 2048, 00:17:02.650 "data_size": 63488 00:17:02.650 }, 00:17:02.650 { 00:17:02.650 "name": null, 00:17:02.650 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:02.650 "is_configured": false, 00:17:02.650 "data_offset": 2048, 00:17:02.650 "data_size": 63488 00:17:02.650 } 00:17:02.650 ] 00:17:02.650 }' 00:17:02.650 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.650 18:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.219 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:03.219 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:03.219 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.219 [2024-07-24 18:20:11.769952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.219 [2024-07-24 18:20:11.769988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.219 [2024-07-24 18:20:11.770000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x194f600 00:17:03.219 [2024-07-24 18:20:11.770008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.219 [2024-07-24 18:20:11.770247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.219 [2024-07-24 18:20:11.770261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.219 [2024-07-24 18:20:11.770307] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:03.219 [2024-07-24 18:20:11.770321] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.219 pt2 00:17:03.219 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:03.219 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:03.219 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:03.479 [2024-07-24 18:20:11.942394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:03.479 [2024-07-24 18:20:11.942419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.479 [2024-07-24 18:20:11.942429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x194f830 00:17:03.479 [2024-07-24 18:20:11.942437] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:17:03.479 [2024-07-24 18:20:11.942640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.479 [2024-07-24 18:20:11.942652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:03.479 [2024-07-24 18:20:11.942687] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:03.479 [2024-07-24 18:20:11.942699] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:03.479 pt3 00:17:03.479 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:03.479 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:03.479 18:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:03.738 [2024-07-24 18:20:12.098798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:03.738 [2024-07-24 18:20:12.098820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.738 [2024-07-24 18:20:12.098830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1958230 00:17:03.738 [2024-07-24 18:20:12.098837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.738 [2024-07-24 18:20:12.099007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.738 [2024-07-24 18:20:12.099018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:03.738 [2024-07-24 18:20:12.099048] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:03.738 [2024-07-24 18:20:12.099058] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:03.738 [2024-07-24 18:20:12.099131] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1952030 00:17:03.738 [2024-07-24 18:20:12.099138] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:03.738 [2024-07-24 18:20:12.099238] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1951890 00:17:03.738 [2024-07-24 18:20:12.099323] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1952030 00:17:03.738 [2024-07-24 18:20:12.099329] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1952030 00:17:03.738 [2024-07-24 18:20:12.099389] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.738 pt4 00:17:03.738 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:03.738 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:03.738 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:03.738 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:03.738 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.739 "name": "raid_bdev1", 00:17:03.739 "uuid": "62576cf0-5838-497c-b15c-257655a67c0b", 00:17:03.739 "strip_size_kb": 64, 00:17:03.739 "state": "online", 00:17:03.739 "raid_level": "concat", 00:17:03.739 "superblock": true, 00:17:03.739 "num_base_bdevs": 4, 00:17:03.739 "num_base_bdevs_discovered": 4, 00:17:03.739 "num_base_bdevs_operational": 4, 00:17:03.739 "base_bdevs_list": [ 00:17:03.739 { 00:17:03.739 "name": "pt1", 00:17:03.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.739 "is_configured": true, 00:17:03.739 "data_offset": 2048, 00:17:03.739 "data_size": 63488 00:17:03.739 }, 00:17:03.739 { 00:17:03.739 "name": "pt2", 00:17:03.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.739 "is_configured": true, 00:17:03.739 "data_offset": 2048, 00:17:03.739 "data_size": 63488 00:17:03.739 }, 00:17:03.739 { 00:17:03.739 "name": "pt3", 00:17:03.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:03.739 "is_configured": true, 00:17:03.739 "data_offset": 2048, 00:17:03.739 "data_size": 63488 00:17:03.739 }, 00:17:03.739 { 00:17:03.739 "name": "pt4", 00:17:03.739 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:03.739 "is_configured": true, 00:17:03.739 "data_offset": 2048, 00:17:03.739 "data_size": 63488 00:17:03.739 } 00:17:03.739 ] 00:17:03.739 }' 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.739 18:20:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.311 18:20:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:04.311 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:04.311 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:04.311 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:04.311 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:04.311 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:04.311 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:04.311 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:04.571 [2024-07-24 18:20:12.941186] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.571 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:04.571 "name": "raid_bdev1", 00:17:04.571 "aliases": [ 00:17:04.571 "62576cf0-5838-497c-b15c-257655a67c0b" 00:17:04.571 ], 00:17:04.571 "product_name": "Raid Volume", 00:17:04.571 "block_size": 512, 00:17:04.571 "num_blocks": 253952, 00:17:04.571 "uuid": "62576cf0-5838-497c-b15c-257655a67c0b", 00:17:04.571 "assigned_rate_limits": { 00:17:04.571 "rw_ios_per_sec": 0, 00:17:04.571 "rw_mbytes_per_sec": 0, 00:17:04.571 "r_mbytes_per_sec": 0, 00:17:04.571 "w_mbytes_per_sec": 0 00:17:04.571 }, 00:17:04.571 "claimed": false, 00:17:04.571 "zoned": false, 00:17:04.571 "supported_io_types": { 00:17:04.571 "read": true, 00:17:04.571 "write": true, 00:17:04.571 "unmap": true, 00:17:04.571 "flush": true, 00:17:04.571 "reset": true, 00:17:04.571 "nvme_admin": false, 00:17:04.571 "nvme_io": false, 00:17:04.571 "nvme_io_md": false, 00:17:04.571 "write_zeroes": 
true, 00:17:04.571 "zcopy": false, 00:17:04.571 "get_zone_info": false, 00:17:04.571 "zone_management": false, 00:17:04.571 "zone_append": false, 00:17:04.571 "compare": false, 00:17:04.571 "compare_and_write": false, 00:17:04.571 "abort": false, 00:17:04.571 "seek_hole": false, 00:17:04.571 "seek_data": false, 00:17:04.571 "copy": false, 00:17:04.571 "nvme_iov_md": false 00:17:04.571 }, 00:17:04.571 "memory_domains": [ 00:17:04.571 { 00:17:04.571 "dma_device_id": "system", 00:17:04.571 "dma_device_type": 1 00:17:04.571 }, 00:17:04.571 { 00:17:04.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.571 "dma_device_type": 2 00:17:04.571 }, 00:17:04.571 { 00:17:04.571 "dma_device_id": "system", 00:17:04.571 "dma_device_type": 1 00:17:04.571 }, 00:17:04.571 { 00:17:04.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.571 "dma_device_type": 2 00:17:04.571 }, 00:17:04.571 { 00:17:04.571 "dma_device_id": "system", 00:17:04.571 "dma_device_type": 1 00:17:04.571 }, 00:17:04.571 { 00:17:04.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.571 "dma_device_type": 2 00:17:04.571 }, 00:17:04.571 { 00:17:04.571 "dma_device_id": "system", 00:17:04.571 "dma_device_type": 1 00:17:04.571 }, 00:17:04.571 { 00:17:04.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.571 "dma_device_type": 2 00:17:04.571 } 00:17:04.571 ], 00:17:04.571 "driver_specific": { 00:17:04.571 "raid": { 00:17:04.571 "uuid": "62576cf0-5838-497c-b15c-257655a67c0b", 00:17:04.571 "strip_size_kb": 64, 00:17:04.571 "state": "online", 00:17:04.571 "raid_level": "concat", 00:17:04.571 "superblock": true, 00:17:04.571 "num_base_bdevs": 4, 00:17:04.571 "num_base_bdevs_discovered": 4, 00:17:04.571 "num_base_bdevs_operational": 4, 00:17:04.571 "base_bdevs_list": [ 00:17:04.571 { 00:17:04.571 "name": "pt1", 00:17:04.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.571 "is_configured": true, 00:17:04.571 "data_offset": 2048, 00:17:04.571 "data_size": 63488 00:17:04.571 }, 00:17:04.571 { 
00:17:04.571 "name": "pt2", 00:17:04.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.571 "is_configured": true, 00:17:04.571 "data_offset": 2048, 00:17:04.571 "data_size": 63488 00:17:04.571 }, 00:17:04.571 { 00:17:04.571 "name": "pt3", 00:17:04.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:04.571 "is_configured": true, 00:17:04.571 "data_offset": 2048, 00:17:04.571 "data_size": 63488 00:17:04.571 }, 00:17:04.571 { 00:17:04.571 "name": "pt4", 00:17:04.571 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:04.571 "is_configured": true, 00:17:04.571 "data_offset": 2048, 00:17:04.571 "data_size": 63488 00:17:04.571 } 00:17:04.571 ] 00:17:04.571 } 00:17:04.571 } 00:17:04.571 }' 00:17:04.571 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:04.571 18:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:04.571 pt2 00:17:04.571 pt3 00:17:04.571 pt4' 00:17:04.571 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:04.571 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:04.571 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:04.831 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:04.831 "name": "pt1", 00:17:04.831 "aliases": [ 00:17:04.831 "00000000-0000-0000-0000-000000000001" 00:17:04.831 ], 00:17:04.831 "product_name": "passthru", 00:17:04.831 "block_size": 512, 00:17:04.831 "num_blocks": 65536, 00:17:04.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.832 "assigned_rate_limits": { 00:17:04.832 "rw_ios_per_sec": 0, 00:17:04.832 "rw_mbytes_per_sec": 0, 00:17:04.832 "r_mbytes_per_sec": 0, 00:17:04.832 
"w_mbytes_per_sec": 0 00:17:04.832 }, 00:17:04.832 "claimed": true, 00:17:04.832 "claim_type": "exclusive_write", 00:17:04.832 "zoned": false, 00:17:04.832 "supported_io_types": { 00:17:04.832 "read": true, 00:17:04.832 "write": true, 00:17:04.832 "unmap": true, 00:17:04.832 "flush": true, 00:17:04.832 "reset": true, 00:17:04.832 "nvme_admin": false, 00:17:04.832 "nvme_io": false, 00:17:04.832 "nvme_io_md": false, 00:17:04.832 "write_zeroes": true, 00:17:04.832 "zcopy": true, 00:17:04.832 "get_zone_info": false, 00:17:04.832 "zone_management": false, 00:17:04.832 "zone_append": false, 00:17:04.832 "compare": false, 00:17:04.832 "compare_and_write": false, 00:17:04.832 "abort": true, 00:17:04.832 "seek_hole": false, 00:17:04.832 "seek_data": false, 00:17:04.832 "copy": true, 00:17:04.832 "nvme_iov_md": false 00:17:04.832 }, 00:17:04.832 "memory_domains": [ 00:17:04.832 { 00:17:04.832 "dma_device_id": "system", 00:17:04.832 "dma_device_type": 1 00:17:04.832 }, 00:17:04.832 { 00:17:04.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.832 "dma_device_type": 2 00:17:04.832 } 00:17:04.832 ], 00:17:04.832 "driver_specific": { 00:17:04.832 "passthru": { 00:17:04.832 "name": "pt1", 00:17:04.832 "base_bdev_name": "malloc1" 00:17:04.832 } 00:17:04.832 } 00:17:04.832 }' 00:17:04.832 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.832 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.832 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:04.832 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.832 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.832 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:04.832 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.832 18:20:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.832 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:04.832 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.093 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.093 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.093 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:05.093 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:05.093 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:05.093 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:05.093 "name": "pt2", 00:17:05.093 "aliases": [ 00:17:05.093 "00000000-0000-0000-0000-000000000002" 00:17:05.093 ], 00:17:05.093 "product_name": "passthru", 00:17:05.093 "block_size": 512, 00:17:05.093 "num_blocks": 65536, 00:17:05.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.093 "assigned_rate_limits": { 00:17:05.093 "rw_ios_per_sec": 0, 00:17:05.093 "rw_mbytes_per_sec": 0, 00:17:05.093 "r_mbytes_per_sec": 0, 00:17:05.093 "w_mbytes_per_sec": 0 00:17:05.093 }, 00:17:05.093 "claimed": true, 00:17:05.093 "claim_type": "exclusive_write", 00:17:05.093 "zoned": false, 00:17:05.093 "supported_io_types": { 00:17:05.093 "read": true, 00:17:05.093 "write": true, 00:17:05.093 "unmap": true, 00:17:05.093 "flush": true, 00:17:05.093 "reset": true, 00:17:05.093 "nvme_admin": false, 00:17:05.093 "nvme_io": false, 00:17:05.093 "nvme_io_md": false, 00:17:05.093 "write_zeroes": true, 00:17:05.093 "zcopy": true, 00:17:05.093 "get_zone_info": false, 00:17:05.093 "zone_management": false, 00:17:05.093 
"zone_append": false, 00:17:05.093 "compare": false, 00:17:05.093 "compare_and_write": false, 00:17:05.093 "abort": true, 00:17:05.093 "seek_hole": false, 00:17:05.093 "seek_data": false, 00:17:05.093 "copy": true, 00:17:05.093 "nvme_iov_md": false 00:17:05.093 }, 00:17:05.093 "memory_domains": [ 00:17:05.093 { 00:17:05.093 "dma_device_id": "system", 00:17:05.093 "dma_device_type": 1 00:17:05.093 }, 00:17:05.093 { 00:17:05.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.093 "dma_device_type": 2 00:17:05.093 } 00:17:05.093 ], 00:17:05.093 "driver_specific": { 00:17:05.093 "passthru": { 00:17:05.093 "name": "pt2", 00:17:05.093 "base_bdev_name": "malloc2" 00:17:05.093 } 00:17:05.093 } 00:17:05.093 }' 00:17:05.093 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:05.352 18:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:05.612 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:05.612 "name": "pt3", 00:17:05.612 "aliases": [ 00:17:05.612 "00000000-0000-0000-0000-000000000003" 00:17:05.612 ], 00:17:05.612 "product_name": "passthru", 00:17:05.612 "block_size": 512, 00:17:05.612 "num_blocks": 65536, 00:17:05.612 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:05.612 "assigned_rate_limits": { 00:17:05.612 "rw_ios_per_sec": 0, 00:17:05.612 "rw_mbytes_per_sec": 0, 00:17:05.612 "r_mbytes_per_sec": 0, 00:17:05.612 "w_mbytes_per_sec": 0 00:17:05.612 }, 00:17:05.612 "claimed": true, 00:17:05.612 "claim_type": "exclusive_write", 00:17:05.612 "zoned": false, 00:17:05.612 "supported_io_types": { 00:17:05.612 "read": true, 00:17:05.612 "write": true, 00:17:05.612 "unmap": true, 00:17:05.612 "flush": true, 00:17:05.612 "reset": true, 00:17:05.612 "nvme_admin": false, 00:17:05.612 "nvme_io": false, 00:17:05.612 "nvme_io_md": false, 00:17:05.612 "write_zeroes": true, 00:17:05.612 "zcopy": true, 00:17:05.612 "get_zone_info": false, 00:17:05.612 "zone_management": false, 00:17:05.612 "zone_append": false, 00:17:05.612 "compare": false, 00:17:05.612 "compare_and_write": false, 00:17:05.612 "abort": true, 00:17:05.612 "seek_hole": false, 00:17:05.612 "seek_data": false, 00:17:05.612 "copy": true, 00:17:05.612 "nvme_iov_md": false 00:17:05.612 }, 00:17:05.612 "memory_domains": [ 00:17:05.612 { 00:17:05.612 "dma_device_id": "system", 00:17:05.612 "dma_device_type": 1 00:17:05.612 }, 00:17:05.612 { 00:17:05.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.612 "dma_device_type": 2 00:17:05.612 } 00:17:05.612 ], 00:17:05.612 "driver_specific": { 
00:17:05.612 "passthru": { 00:17:05.612 "name": "pt3", 00:17:05.612 "base_bdev_name": "malloc3" 00:17:05.612 } 00:17:05.612 } 00:17:05.612 }' 00:17:05.612 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.612 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.612 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:05.612 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:05.872 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:06.132 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.132 "name": "pt4", 00:17:06.132 "aliases": [ 00:17:06.132 "00000000-0000-0000-0000-000000000004" 00:17:06.132 ], 00:17:06.132 "product_name": "passthru", 
00:17:06.132 "block_size": 512, 00:17:06.132 "num_blocks": 65536, 00:17:06.132 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:06.132 "assigned_rate_limits": { 00:17:06.132 "rw_ios_per_sec": 0, 00:17:06.132 "rw_mbytes_per_sec": 0, 00:17:06.132 "r_mbytes_per_sec": 0, 00:17:06.132 "w_mbytes_per_sec": 0 00:17:06.132 }, 00:17:06.132 "claimed": true, 00:17:06.132 "claim_type": "exclusive_write", 00:17:06.132 "zoned": false, 00:17:06.132 "supported_io_types": { 00:17:06.132 "read": true, 00:17:06.132 "write": true, 00:17:06.132 "unmap": true, 00:17:06.132 "flush": true, 00:17:06.132 "reset": true, 00:17:06.132 "nvme_admin": false, 00:17:06.132 "nvme_io": false, 00:17:06.132 "nvme_io_md": false, 00:17:06.132 "write_zeroes": true, 00:17:06.132 "zcopy": true, 00:17:06.132 "get_zone_info": false, 00:17:06.132 "zone_management": false, 00:17:06.132 "zone_append": false, 00:17:06.132 "compare": false, 00:17:06.132 "compare_and_write": false, 00:17:06.132 "abort": true, 00:17:06.132 "seek_hole": false, 00:17:06.132 "seek_data": false, 00:17:06.132 "copy": true, 00:17:06.132 "nvme_iov_md": false 00:17:06.132 }, 00:17:06.132 "memory_domains": [ 00:17:06.132 { 00:17:06.132 "dma_device_id": "system", 00:17:06.132 "dma_device_type": 1 00:17:06.132 }, 00:17:06.132 { 00:17:06.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.132 "dma_device_type": 2 00:17:06.132 } 00:17:06.132 ], 00:17:06.132 "driver_specific": { 00:17:06.132 "passthru": { 00:17:06.132 "name": "pt4", 00:17:06.132 "base_bdev_name": "malloc4" 00:17:06.132 } 00:17:06.132 } 00:17:06.132 }' 00:17:06.132 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.132 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.132 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:06.132 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.132 18:20:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.132 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:06.132 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.391 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.391 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:06.391 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.391 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.391 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:06.391 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:06.391 18:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:06.651 [2024-07-24 18:20:15.026562] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 62576cf0-5838-497c-b15c-257655a67c0b '!=' 62576cf0-5838-497c-b15c-257655a67c0b ']' 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2236213 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2236213 ']' 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2236213 00:17:06.651 
18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2236213 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2236213' 00:17:06.651 killing process with pid 2236213 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2236213 00:17:06.651 [2024-07-24 18:20:15.098431] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:06.651 [2024-07-24 18:20:15.098480] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.651 [2024-07-24 18:20:15.098523] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.651 [2024-07-24 18:20:15.098532] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1952030 name raid_bdev1, state offline 00:17:06.651 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2236213 00:17:06.651 [2024-07-24 18:20:15.128023] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.911 18:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:06.911 00:17:06.911 real 0m12.296s 00:17:06.911 user 0m22.018s 00:17:06.911 sys 0m2.328s 00:17:06.911 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.911 18:20:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.911 
************************************ 00:17:06.911 END TEST raid_superblock_test 00:17:06.911 ************************************ 00:17:06.911 18:20:15 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:17:06.911 18:20:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:06.911 18:20:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:06.911 18:20:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.911 ************************************ 00:17:06.911 START TEST raid_read_error_test 00:17:06.911 ************************************ 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 
00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:06.911 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.zxTEXPLOTu 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2238683 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 
2238683 /var/tmp/spdk-raid.sock 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2238683 ']' 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:06.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.912 18:20:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.912 [2024-07-24 18:20:15.452417] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:17:06.912 [2024-07-24 18:20:15.452461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2238683 ] 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:01.0 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:01.1 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:01.2 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:01.3 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:01.4 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:01.5 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:01.6 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:01.7 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:02.0 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:02.1 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:02.2 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:02.3 cannot be used 
00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:02.4 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:02.5 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:02.6 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b3:02.7 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:01.0 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:01.1 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:01.2 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:01.3 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:01.4 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:01.5 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:01.6 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:01.7 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:02.0 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:02.1 cannot be used 00:17:06.912 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:02.2 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:02.3 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:02.4 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:02.5 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:02.6 cannot be used 00:17:06.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:06.912 EAL: Requested device 0000:b5:02.7 cannot be used 00:17:07.172 [2024-07-24 18:20:15.544507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.172 [2024-07-24 18:20:15.617484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.172 [2024-07-24 18:20:15.667113] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.172 [2024-07-24 18:20:15.667142] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.741 18:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.741 18:20:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:07.742 18:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:07.742 18:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:08.002 BaseBdev1_malloc 00:17:08.002 18:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:08.002 true 00:17:08.002 18:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:08.262 [2024-07-24 18:20:16.731504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:08.262 [2024-07-24 18:20:16.731538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.262 [2024-07-24 18:20:16.731551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc02ed0 00:17:08.262 [2024-07-24 18:20:16.731560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.262 [2024-07-24 18:20:16.732739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.262 [2024-07-24 18:20:16.732763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:08.262 BaseBdev1 00:17:08.262 18:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:08.262 18:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:08.521 BaseBdev2_malloc 00:17:08.521 18:20:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:08.521 true 00:17:08.521 18:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:08.781 [2024-07-24 18:20:17.228272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:17:08.781 [2024-07-24 18:20:17.228306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.781 [2024-07-24 18:20:17.228320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc07b60 00:17:08.781 [2024-07-24 18:20:17.228328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.781 [2024-07-24 18:20:17.229380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.781 [2024-07-24 18:20:17.229403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:08.781 BaseBdev2 00:17:08.781 18:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:08.781 18:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:09.041 BaseBdev3_malloc 00:17:09.041 18:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:09.041 true 00:17:09.041 18:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:09.302 [2024-07-24 18:20:17.733152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:09.302 [2024-07-24 18:20:17.733183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.302 [2024-07-24 18:20:17.733198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc08ad0 00:17:09.302 [2024-07-24 18:20:17.733206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.302 [2024-07-24 18:20:17.734144] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.302 [2024-07-24 18:20:17.734167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:09.302 BaseBdev3 00:17:09.302 18:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:09.302 18:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:09.564 BaseBdev4_malloc 00:17:09.564 18:20:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:09.564 true 00:17:09.564 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:09.850 [2024-07-24 18:20:18.238150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:09.850 [2024-07-24 18:20:18.238184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.850 [2024-07-24 18:20:18.238197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc0ad40 00:17:09.850 [2024-07-24 18:20:18.238205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.850 [2024-07-24 18:20:18.239236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.850 [2024-07-24 18:20:18.239257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:09.850 BaseBdev4 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 
64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:09.850 [2024-07-24 18:20:18.406611] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.850 [2024-07-24 18:20:18.407442] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.850 [2024-07-24 18:20:18.407488] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:09.850 [2024-07-24 18:20:18.407524] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:09.850 [2024-07-24 18:20:18.407698] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xc0bb10 00:17:09.850 [2024-07-24 18:20:18.407706] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:09.850 [2024-07-24 18:20:18.407838] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc0cdc0 00:17:09.850 [2024-07-24 18:20:18.407936] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc0bb10 00:17:09.850 [2024-07-24 18:20:18.407943] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc0bb10 00:17:09.850 [2024-07-24 18:20:18.408008] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.850 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.115 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.115 "name": "raid_bdev1", 00:17:10.115 "uuid": "6ab558f6-3156-422e-84bd-1400441d00d0", 00:17:10.115 "strip_size_kb": 64, 00:17:10.115 "state": "online", 00:17:10.115 "raid_level": "concat", 00:17:10.115 "superblock": true, 00:17:10.115 "num_base_bdevs": 4, 00:17:10.115 "num_base_bdevs_discovered": 4, 00:17:10.115 "num_base_bdevs_operational": 4, 00:17:10.115 "base_bdevs_list": [ 00:17:10.115 { 00:17:10.115 "name": "BaseBdev1", 00:17:10.115 "uuid": "cef04d62-095b-5df1-af66-790b4767fb4a", 00:17:10.115 "is_configured": true, 00:17:10.115 "data_offset": 2048, 00:17:10.115 "data_size": 63488 00:17:10.115 }, 00:17:10.115 { 00:17:10.115 "name": "BaseBdev2", 00:17:10.115 "uuid": "54708abf-25e7-5092-9070-5e12c59239c2", 00:17:10.115 "is_configured": true, 00:17:10.115 "data_offset": 2048, 00:17:10.115 "data_size": 63488 00:17:10.115 }, 00:17:10.115 { 00:17:10.115 "name": "BaseBdev3", 00:17:10.115 "uuid": "9f2bc0af-b9f1-5a6e-a876-d124bade2c56", 00:17:10.115 "is_configured": true, 00:17:10.115 "data_offset": 2048, 00:17:10.115 "data_size": 63488 00:17:10.115 }, 00:17:10.115 { 00:17:10.115 "name": "BaseBdev4", 
00:17:10.115 "uuid": "8429520c-96c2-5624-9653-d8888f093090", 00:17:10.115 "is_configured": true, 00:17:10.115 "data_offset": 2048, 00:17:10.115 "data_size": 63488 00:17:10.115 } 00:17:10.115 ] 00:17:10.115 }' 00:17:10.115 18:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.115 18:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.684 18:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:10.684 18:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:10.684 [2024-07-24 18:20:19.108612] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc109b0 00:17:11.623 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:11.883 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:11.883 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:11.883 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:11.883 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:11.883 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:11.883 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.883 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:11.883 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.884 18:20:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:11.884 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.884 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.884 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.884 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.884 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.884 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.884 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.884 "name": "raid_bdev1", 00:17:11.884 "uuid": "6ab558f6-3156-422e-84bd-1400441d00d0", 00:17:11.884 "strip_size_kb": 64, 00:17:11.884 "state": "online", 00:17:11.884 "raid_level": "concat", 00:17:11.884 "superblock": true, 00:17:11.884 "num_base_bdevs": 4, 00:17:11.884 "num_base_bdevs_discovered": 4, 00:17:11.884 "num_base_bdevs_operational": 4, 00:17:11.884 "base_bdevs_list": [ 00:17:11.884 { 00:17:11.884 "name": "BaseBdev1", 00:17:11.884 "uuid": "cef04d62-095b-5df1-af66-790b4767fb4a", 00:17:11.884 "is_configured": true, 00:17:11.884 "data_offset": 2048, 00:17:11.884 "data_size": 63488 00:17:11.884 }, 00:17:11.884 { 00:17:11.884 "name": "BaseBdev2", 00:17:11.884 "uuid": "54708abf-25e7-5092-9070-5e12c59239c2", 00:17:11.884 "is_configured": true, 00:17:11.884 "data_offset": 2048, 00:17:11.884 "data_size": 63488 00:17:11.884 }, 00:17:11.884 { 00:17:11.884 "name": "BaseBdev3", 00:17:11.884 "uuid": "9f2bc0af-b9f1-5a6e-a876-d124bade2c56", 00:17:11.884 "is_configured": true, 00:17:11.884 "data_offset": 2048, 00:17:11.884 "data_size": 63488 
00:17:11.884 }, 00:17:11.884 { 00:17:11.884 "name": "BaseBdev4", 00:17:11.884 "uuid": "8429520c-96c2-5624-9653-d8888f093090", 00:17:11.884 "is_configured": true, 00:17:11.884 "data_offset": 2048, 00:17:11.884 "data_size": 63488 00:17:11.884 } 00:17:11.884 ] 00:17:11.884 }' 00:17:11.884 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.884 18:20:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.455 18:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:12.455 [2024-07-24 18:20:21.044695] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.455 [2024-07-24 18:20:21.044721] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.455 [2024-07-24 18:20:21.046768] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.455 [2024-07-24 18:20:21.046795] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.455 [2024-07-24 18:20:21.046821] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.455 [2024-07-24 18:20:21.046828] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc0bb10 name raid_bdev1, state offline 00:17:12.455 0 00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2238683 00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2238683 ']' 00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2238683 00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2238683 00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2238683' 00:17:12.715 killing process with pid 2238683 00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2238683 00:17:12.715 [2024-07-24 18:20:21.106851] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:12.715 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2238683 00:17:12.715 [2024-07-24 18:20:21.131705] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:12.976 18:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.zxTEXPLOTu 00:17:12.976 18:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:12.976 18:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:12.976 18:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:17:12.976 18:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:12.976 18:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:12.976 18:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:12.976 18:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:17:12.976 00:17:12.976 real 0m5.937s 00:17:12.976 user 0m9.128s 00:17:12.976 sys 0m1.065s 00:17:12.976 18:20:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:12.976 18:20:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.976 ************************************ 00:17:12.976 END TEST raid_read_error_test 00:17:12.976 ************************************ 00:17:12.976 18:20:21 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:17:12.976 18:20:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:12.976 18:20:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:12.976 18:20:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.976 ************************************ 00:17:12.976 START TEST raid_write_error_test 00:17:12.976 ************************************ 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:12.976 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:12.977 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.phaWDAlm21 00:17:12.977 18:20:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@808 -- # raid_pid=2239851 00:17:12.977 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2239851 /var/tmp/spdk-raid.sock 00:17:12.977 18:20:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:12.977 18:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2239851 ']' 00:17:12.977 18:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:12.977 18:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.977 18:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:12.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:12.977 18:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.977 18:20:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.977 [2024-07-24 18:20:21.479417] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:17:12.977 [2024-07-24 18:20:21.479464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239851 ] 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:01.0 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:01.1 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:01.2 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:01.3 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:01.4 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:01.5 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:01.6 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:01.7 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:02.0 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:02.1 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:02.2 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:02.3 cannot be used 
00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:02.4 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:02.5 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:02.6 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b3:02.7 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:01.0 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:01.1 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:01.2 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:01.3 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:01.4 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:01.5 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:01.6 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:01.7 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:02.0 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:02.1 cannot be used 00:17:12.977 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:02.2 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:02.3 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:02.4 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:02.5 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:02.6 cannot be used 00:17:12.977 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:12.977 EAL: Requested device 0000:b5:02.7 cannot be used 00:17:13.237 [2024-07-24 18:20:21.573012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.237 [2024-07-24 18:20:21.646906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.237 [2024-07-24 18:20:21.697199] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.237 [2024-07-24 18:20:21.697228] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.807 18:20:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.807 18:20:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:13.807 18:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:13.807 18:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:14.067 BaseBdev1_malloc 00:17:14.067 18:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:14.067 true 00:17:14.067 18:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:14.328 [2024-07-24 18:20:22.761936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:14.328 [2024-07-24 18:20:22.761969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.328 [2024-07-24 18:20:22.761982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1567ed0 00:17:14.328 [2024-07-24 18:20:22.762001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.328 [2024-07-24 18:20:22.763100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.328 [2024-07-24 18:20:22.763124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:14.328 BaseBdev1 00:17:14.328 18:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:14.328 18:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:14.328 BaseBdev2_malloc 00:17:14.587 18:20:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:14.587 true 00:17:14.587 18:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:14.846 [2024-07-24 18:20:23.246775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:17:14.846 [2024-07-24 18:20:23.246809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.846 [2024-07-24 18:20:23.246822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x156cb60 00:17:14.846 [2024-07-24 18:20:23.246830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.846 [2024-07-24 18:20:23.247882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.846 [2024-07-24 18:20:23.247903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:14.846 BaseBdev2 00:17:14.846 18:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:14.846 18:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:14.846 BaseBdev3_malloc 00:17:14.846 18:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:15.104 true 00:17:15.104 18:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:15.363 [2024-07-24 18:20:23.743664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:15.363 [2024-07-24 18:20:23.743695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.363 [2024-07-24 18:20:23.743710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x156dad0 00:17:15.364 [2024-07-24 18:20:23.743719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.364 [2024-07-24 
18:20:23.744734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.364 [2024-07-24 18:20:23.744757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:15.364 BaseBdev3 00:17:15.364 18:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:15.364 18:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:15.364 BaseBdev4_malloc 00:17:15.364 18:20:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:15.623 true 00:17:15.623 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:15.883 [2024-07-24 18:20:24.256718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:15.883 [2024-07-24 18:20:24.256751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.883 [2024-07-24 18:20:24.256765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x156fd40 00:17:15.883 [2024-07-24 18:20:24.256774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.883 [2024-07-24 18:20:24.257809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.883 [2024-07-24 18:20:24.257831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:15.883 BaseBdev4 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:15.883 [2024-07-24 18:20:24.413141] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:15.883 [2024-07-24 18:20:24.413965] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.883 [2024-07-24 18:20:24.414012] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:15.883 [2024-07-24 18:20:24.414054] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:15.883 [2024-07-24 18:20:24.414210] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1570b10 00:17:15.883 [2024-07-24 18:20:24.414218] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:15.883 [2024-07-24 18:20:24.414343] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1571dc0 00:17:15.883 [2024-07-24 18:20:24.414442] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1570b10 00:17:15.883 [2024-07-24 18:20:24.414449] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1570b10 00:17:15.883 [2024-07-24 18:20:24.414515] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.883 18:20:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.883 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.143 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.143 "name": "raid_bdev1", 00:17:16.143 "uuid": "8663bac4-84c3-4f30-ac85-0c9399ad5452", 00:17:16.143 "strip_size_kb": 64, 00:17:16.143 "state": "online", 00:17:16.143 "raid_level": "concat", 00:17:16.143 "superblock": true, 00:17:16.143 "num_base_bdevs": 4, 00:17:16.143 "num_base_bdevs_discovered": 4, 00:17:16.143 "num_base_bdevs_operational": 4, 00:17:16.143 "base_bdevs_list": [ 00:17:16.143 { 00:17:16.143 "name": "BaseBdev1", 00:17:16.143 "uuid": "ccbaf08d-2fa2-5a22-8957-180e4f6b2c9d", 00:17:16.143 "is_configured": true, 00:17:16.143 "data_offset": 2048, 00:17:16.143 "data_size": 63488 00:17:16.143 }, 00:17:16.143 { 00:17:16.143 "name": "BaseBdev2", 00:17:16.143 "uuid": "c953ba64-9f98-5883-bda0-1bf92822f8cc", 00:17:16.143 "is_configured": true, 00:17:16.143 "data_offset": 2048, 00:17:16.143 "data_size": 63488 00:17:16.143 }, 00:17:16.143 { 00:17:16.143 "name": "BaseBdev3", 00:17:16.143 "uuid": "edbb14a4-d742-5b9f-ac5c-6704381294f4", 00:17:16.143 "is_configured": true, 00:17:16.143 "data_offset": 2048, 00:17:16.143 "data_size": 
63488 00:17:16.143 }, 00:17:16.143 { 00:17:16.143 "name": "BaseBdev4", 00:17:16.143 "uuid": "5e726e9d-6cf0-56eb-8976-090245955dcc", 00:17:16.143 "is_configured": true, 00:17:16.143 "data_offset": 2048, 00:17:16.143 "data_size": 63488 00:17:16.143 } 00:17:16.143 ] 00:17:16.143 }' 00:17:16.143 18:20:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.143 18:20:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.711 18:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:16.711 18:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:16.711 [2024-07-24 18:20:25.167282] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15759b0 00:17:17.650 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.911 "name": "raid_bdev1", 00:17:17.911 "uuid": "8663bac4-84c3-4f30-ac85-0c9399ad5452", 00:17:17.911 "strip_size_kb": 64, 00:17:17.911 "state": "online", 00:17:17.911 "raid_level": "concat", 00:17:17.911 "superblock": true, 00:17:17.911 "num_base_bdevs": 4, 00:17:17.911 "num_base_bdevs_discovered": 4, 00:17:17.911 "num_base_bdevs_operational": 4, 00:17:17.911 "base_bdevs_list": [ 00:17:17.911 { 00:17:17.911 "name": "BaseBdev1", 00:17:17.911 "uuid": "ccbaf08d-2fa2-5a22-8957-180e4f6b2c9d", 00:17:17.911 "is_configured": true, 00:17:17.911 "data_offset": 2048, 00:17:17.911 "data_size": 63488 00:17:17.911 }, 00:17:17.911 { 00:17:17.911 "name": "BaseBdev2", 00:17:17.911 "uuid": "c953ba64-9f98-5883-bda0-1bf92822f8cc", 00:17:17.911 "is_configured": true, 00:17:17.911 "data_offset": 2048, 00:17:17.911 "data_size": 63488 00:17:17.911 }, 00:17:17.911 { 00:17:17.911 "name": "BaseBdev3", 00:17:17.911 "uuid": "edbb14a4-d742-5b9f-ac5c-6704381294f4", 00:17:17.911 
"is_configured": true, 00:17:17.911 "data_offset": 2048, 00:17:17.911 "data_size": 63488 00:17:17.911 }, 00:17:17.911 { 00:17:17.911 "name": "BaseBdev4", 00:17:17.911 "uuid": "5e726e9d-6cf0-56eb-8976-090245955dcc", 00:17:17.911 "is_configured": true, 00:17:17.911 "data_offset": 2048, 00:17:17.911 "data_size": 63488 00:17:17.911 } 00:17:17.911 ] 00:17:17.911 }' 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.911 18:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.480 18:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:18.739 [2024-07-24 18:20:27.091650] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.739 [2024-07-24 18:20:27.091674] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.739 [2024-07-24 18:20:27.093777] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.740 [2024-07-24 18:20:27.093805] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.740 [2024-07-24 18:20:27.093832] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.740 [2024-07-24 18:20:27.093839] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1570b10 name raid_bdev1, state offline 00:17:18.740 0 00:17:18.740 18:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2239851 00:17:18.740 18:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2239851 ']' 00:17:18.740 18:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2239851 00:17:18.740 18:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:17:18.740 18:20:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.740 18:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2239851 00:17:18.740 18:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.740 18:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.740 18:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2239851' 00:17:18.740 killing process with pid 2239851 00:17:18.740 18:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2239851 00:17:18.740 [2024-07-24 18:20:27.167239] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:18.740 18:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2239851 00:17:18.740 [2024-07-24 18:20:27.192571] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.000 18:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.phaWDAlm21 00:17:19.000 18:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:19.000 18:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:19.000 18:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:17:19.000 18:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:19.000 18:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:19.000 18:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:19.000 18:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:17:19.000 00:17:19.000 real 0m5.971s 00:17:19.000 user 0m9.210s 00:17:19.000 sys 0m1.048s 00:17:19.000 18:20:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.000 18:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.000 ************************************ 00:17:19.000 END TEST raid_write_error_test 00:17:19.000 ************************************ 00:17:19.000 18:20:27 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:19.000 18:20:27 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:19.000 18:20:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:19.000 18:20:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.000 18:20:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.000 ************************************ 00:17:19.000 START TEST raid_state_function_test 00:17:19.000 ************************************ 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' 
false = true ']' 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=2241013 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2241013' 00:17:19.000 Process raid pid: 2241013 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 2241013 /var/tmp/spdk-raid.sock 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 2241013 ']' 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:19.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:19.000 18:20:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.000 [2024-07-24 18:20:27.530107] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:17:19.000 [2024-07-24 18:20:27.530152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:01.0 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:01.1 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:01.2 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:01.3 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:01.4 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:01.5 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:01.6 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:01.7 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:02.0 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:02.1 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:02.2 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:02.3 cannot be used 00:17:19.000 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:02.4 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:02.5 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:02.6 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b3:02.7 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:01.0 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:01.1 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:01.2 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:01.3 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:01.4 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:01.5 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:01.6 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:01.7 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:02.0 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:02.1 cannot be used 00:17:19.000 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:02.2 cannot be used 00:17:19.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.000 EAL: Requested device 0000:b5:02.3 cannot be used 00:17:19.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.001 EAL: Requested device 0000:b5:02.4 cannot be used 00:17:19.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.001 EAL: Requested device 0000:b5:02.5 cannot be used 00:17:19.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.001 EAL: Requested device 0000:b5:02.6 cannot be used 00:17:19.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:19.001 EAL: Requested device 0000:b5:02.7 cannot be used 00:17:19.260 [2024-07-24 18:20:27.623084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.260 [2024-07-24 18:20:27.689792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.260 [2024-07-24 18:20:27.746398] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.260 [2024-07-24 18:20:27.746422] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.828 18:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.828 18:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:19.828 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:20.088 [2024-07-24 18:20:28.473303] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.088 [2024-07-24 18:20:28.473336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:17:20.088 [2024-07-24 18:20:28.473343] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.088 [2024-07-24 18:20:28.473351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.088 [2024-07-24 18:20:28.473356] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.088 [2024-07-24 18:20:28.473364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.088 [2024-07-24 18:20:28.473369] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:20.088 [2024-07-24 18:20:28.473381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.088 18:20:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.088 "name": "Existed_Raid", 00:17:20.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.088 "strip_size_kb": 0, 00:17:20.088 "state": "configuring", 00:17:20.088 "raid_level": "raid1", 00:17:20.088 "superblock": false, 00:17:20.088 "num_base_bdevs": 4, 00:17:20.088 "num_base_bdevs_discovered": 0, 00:17:20.088 "num_base_bdevs_operational": 4, 00:17:20.088 "base_bdevs_list": [ 00:17:20.088 { 00:17:20.088 "name": "BaseBdev1", 00:17:20.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.088 "is_configured": false, 00:17:20.088 "data_offset": 0, 00:17:20.088 "data_size": 0 00:17:20.088 }, 00:17:20.088 { 00:17:20.088 "name": "BaseBdev2", 00:17:20.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.088 "is_configured": false, 00:17:20.088 "data_offset": 0, 00:17:20.088 "data_size": 0 00:17:20.088 }, 00:17:20.088 { 00:17:20.088 "name": "BaseBdev3", 00:17:20.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.088 "is_configured": false, 00:17:20.088 "data_offset": 0, 00:17:20.088 "data_size": 0 00:17:20.088 }, 00:17:20.088 { 00:17:20.088 "name": "BaseBdev4", 00:17:20.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.088 "is_configured": false, 00:17:20.088 "data_offset": 0, 00:17:20.088 "data_size": 0 00:17:20.088 } 00:17:20.088 ] 00:17:20.088 }' 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.088 18:20:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.656 18:20:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:20.916 [2024-07-24 18:20:29.327409] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.916 [2024-07-24 18:20:29.327429] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19b01e0 name Existed_Raid, state configuring 00:17:20.916 18:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:20.916 [2024-07-24 18:20:29.495851] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:20.916 [2024-07-24 18:20:29.495869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:20.916 [2024-07-24 18:20:29.495875] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.916 [2024-07-24 18:20:29.495882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.916 [2024-07-24 18:20:29.495888] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.916 [2024-07-24 18:20:29.495895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.916 [2024-07-24 18:20:29.495908] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:20.916 [2024-07-24 18:20:29.495915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:21.175 18:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.175 [2024-07-24 18:20:29.664751] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.175 BaseBdev1 00:17:21.175 18:20:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:21.175 18:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:21.175 18:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:21.175 18:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:21.175 18:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:21.175 18:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:21.175 18:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.435 18:20:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.435 [ 00:17:21.435 { 00:17:21.435 "name": "BaseBdev1", 00:17:21.435 "aliases": [ 00:17:21.435 "270138ee-cb4e-4be7-b07a-8f52b270e427" 00:17:21.435 ], 00:17:21.435 "product_name": "Malloc disk", 00:17:21.435 "block_size": 512, 00:17:21.435 "num_blocks": 65536, 00:17:21.435 "uuid": "270138ee-cb4e-4be7-b07a-8f52b270e427", 00:17:21.435 "assigned_rate_limits": { 00:17:21.435 "rw_ios_per_sec": 0, 00:17:21.435 "rw_mbytes_per_sec": 0, 00:17:21.435 "r_mbytes_per_sec": 0, 00:17:21.435 "w_mbytes_per_sec": 0 00:17:21.435 }, 00:17:21.435 "claimed": true, 00:17:21.435 "claim_type": "exclusive_write", 00:17:21.435 "zoned": false, 00:17:21.435 "supported_io_types": { 00:17:21.435 "read": true, 00:17:21.435 "write": true, 00:17:21.435 "unmap": true, 00:17:21.435 "flush": true, 00:17:21.435 
"reset": true, 00:17:21.435 "nvme_admin": false, 00:17:21.435 "nvme_io": false, 00:17:21.435 "nvme_io_md": false, 00:17:21.435 "write_zeroes": true, 00:17:21.435 "zcopy": true, 00:17:21.435 "get_zone_info": false, 00:17:21.435 "zone_management": false, 00:17:21.435 "zone_append": false, 00:17:21.435 "compare": false, 00:17:21.435 "compare_and_write": false, 00:17:21.435 "abort": true, 00:17:21.435 "seek_hole": false, 00:17:21.435 "seek_data": false, 00:17:21.435 "copy": true, 00:17:21.435 "nvme_iov_md": false 00:17:21.435 }, 00:17:21.435 "memory_domains": [ 00:17:21.435 { 00:17:21.435 "dma_device_id": "system", 00:17:21.435 "dma_device_type": 1 00:17:21.435 }, 00:17:21.435 { 00:17:21.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.435 "dma_device_type": 2 00:17:21.435 } 00:17:21.435 ], 00:17:21.435 "driver_specific": {} 00:17:21.435 } 00:17:21.435 ] 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.435 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.694 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.694 "name": "Existed_Raid", 00:17:21.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.694 "strip_size_kb": 0, 00:17:21.694 "state": "configuring", 00:17:21.694 "raid_level": "raid1", 00:17:21.694 "superblock": false, 00:17:21.694 "num_base_bdevs": 4, 00:17:21.694 "num_base_bdevs_discovered": 1, 00:17:21.694 "num_base_bdevs_operational": 4, 00:17:21.694 "base_bdevs_list": [ 00:17:21.694 { 00:17:21.694 "name": "BaseBdev1", 00:17:21.694 "uuid": "270138ee-cb4e-4be7-b07a-8f52b270e427", 00:17:21.694 "is_configured": true, 00:17:21.694 "data_offset": 0, 00:17:21.694 "data_size": 65536 00:17:21.694 }, 00:17:21.694 { 00:17:21.694 "name": "BaseBdev2", 00:17:21.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.694 "is_configured": false, 00:17:21.694 "data_offset": 0, 00:17:21.695 "data_size": 0 00:17:21.695 }, 00:17:21.695 { 00:17:21.695 "name": "BaseBdev3", 00:17:21.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.695 "is_configured": false, 00:17:21.695 "data_offset": 0, 00:17:21.695 "data_size": 0 00:17:21.695 }, 00:17:21.695 { 00:17:21.695 "name": "BaseBdev4", 00:17:21.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.695 "is_configured": false, 00:17:21.695 "data_offset": 0, 00:17:21.695 "data_size": 0 00:17:21.695 } 00:17:21.695 ] 00:17:21.695 }' 00:17:21.695 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:17:21.695 18:20:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.263 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:22.263 [2024-07-24 18:20:30.843766] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.263 [2024-07-24 18:20:30.843796] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19afa50 name Existed_Raid, state configuring 00:17:22.522 18:20:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:22.522 [2024-07-24 18:20:31.016232] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.522 [2024-07-24 18:20:31.017255] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.522 [2024-07-24 18:20:31.017280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.522 [2024-07-24 18:20:31.017287] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.522 [2024-07-24 18:20:31.017294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.522 [2024-07-24 18:20:31.017299] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:22.522 [2024-07-24 18:20:31.017306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:22.522 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:22.522 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:22.522 18:20:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.523 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.782 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.782 "name": "Existed_Raid", 00:17:22.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.782 "strip_size_kb": 0, 00:17:22.782 "state": "configuring", 00:17:22.782 "raid_level": "raid1", 00:17:22.782 "superblock": false, 00:17:22.782 "num_base_bdevs": 4, 00:17:22.782 "num_base_bdevs_discovered": 1, 00:17:22.782 "num_base_bdevs_operational": 4, 00:17:22.782 "base_bdevs_list": [ 00:17:22.782 { 00:17:22.782 
"name": "BaseBdev1", 00:17:22.782 "uuid": "270138ee-cb4e-4be7-b07a-8f52b270e427", 00:17:22.782 "is_configured": true, 00:17:22.782 "data_offset": 0, 00:17:22.782 "data_size": 65536 00:17:22.782 }, 00:17:22.782 { 00:17:22.782 "name": "BaseBdev2", 00:17:22.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.782 "is_configured": false, 00:17:22.782 "data_offset": 0, 00:17:22.782 "data_size": 0 00:17:22.782 }, 00:17:22.782 { 00:17:22.782 "name": "BaseBdev3", 00:17:22.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.782 "is_configured": false, 00:17:22.782 "data_offset": 0, 00:17:22.782 "data_size": 0 00:17:22.782 }, 00:17:22.782 { 00:17:22.782 "name": "BaseBdev4", 00:17:22.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.782 "is_configured": false, 00:17:22.782 "data_offset": 0, 00:17:22.782 "data_size": 0 00:17:22.782 } 00:17:22.782 ] 00:17:22.782 }' 00:17:22.782 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.782 18:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.351 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:23.351 [2024-07-24 18:20:31.849128] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.351 BaseBdev2 00:17:23.351 18:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:23.351 18:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:23.351 18:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:23.351 18:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:23.351 18:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:17:23.351 18:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:23.351 18:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:23.610 [ 00:17:23.610 { 00:17:23.610 "name": "BaseBdev2", 00:17:23.610 "aliases": [ 00:17:23.610 "85f4a2cd-359d-4315-88e8-c41aa6653c64" 00:17:23.610 ], 00:17:23.610 "product_name": "Malloc disk", 00:17:23.610 "block_size": 512, 00:17:23.610 "num_blocks": 65536, 00:17:23.610 "uuid": "85f4a2cd-359d-4315-88e8-c41aa6653c64", 00:17:23.610 "assigned_rate_limits": { 00:17:23.610 "rw_ios_per_sec": 0, 00:17:23.610 "rw_mbytes_per_sec": 0, 00:17:23.610 "r_mbytes_per_sec": 0, 00:17:23.610 "w_mbytes_per_sec": 0 00:17:23.610 }, 00:17:23.610 "claimed": true, 00:17:23.610 "claim_type": "exclusive_write", 00:17:23.610 "zoned": false, 00:17:23.610 "supported_io_types": { 00:17:23.610 "read": true, 00:17:23.610 "write": true, 00:17:23.610 "unmap": true, 00:17:23.610 "flush": true, 00:17:23.610 "reset": true, 00:17:23.610 "nvme_admin": false, 00:17:23.610 "nvme_io": false, 00:17:23.610 "nvme_io_md": false, 00:17:23.610 "write_zeroes": true, 00:17:23.610 "zcopy": true, 00:17:23.610 "get_zone_info": false, 00:17:23.610 "zone_management": false, 00:17:23.610 "zone_append": false, 00:17:23.610 "compare": false, 00:17:23.610 "compare_and_write": false, 00:17:23.610 "abort": true, 00:17:23.610 "seek_hole": false, 00:17:23.610 "seek_data": false, 00:17:23.610 "copy": true, 00:17:23.610 "nvme_iov_md": false 00:17:23.610 }, 00:17:23.610 "memory_domains": [ 00:17:23.610 { 00:17:23.610 "dma_device_id": "system", 00:17:23.610 
"dma_device_type": 1 00:17:23.610 }, 00:17:23.610 { 00:17:23.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.610 "dma_device_type": 2 00:17:23.610 } 00:17:23.610 ], 00:17:23.610 "driver_specific": {} 00:17:23.610 } 00:17:23.610 ] 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.610 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.611 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:23.870 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.870 "name": "Existed_Raid", 00:17:23.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.870 "strip_size_kb": 0, 00:17:23.870 "state": "configuring", 00:17:23.870 "raid_level": "raid1", 00:17:23.870 "superblock": false, 00:17:23.870 "num_base_bdevs": 4, 00:17:23.870 "num_base_bdevs_discovered": 2, 00:17:23.870 "num_base_bdevs_operational": 4, 00:17:23.870 "base_bdevs_list": [ 00:17:23.870 { 00:17:23.870 "name": "BaseBdev1", 00:17:23.870 "uuid": "270138ee-cb4e-4be7-b07a-8f52b270e427", 00:17:23.870 "is_configured": true, 00:17:23.870 "data_offset": 0, 00:17:23.870 "data_size": 65536 00:17:23.870 }, 00:17:23.870 { 00:17:23.870 "name": "BaseBdev2", 00:17:23.870 "uuid": "85f4a2cd-359d-4315-88e8-c41aa6653c64", 00:17:23.870 "is_configured": true, 00:17:23.870 "data_offset": 0, 00:17:23.870 "data_size": 65536 00:17:23.870 }, 00:17:23.870 { 00:17:23.870 "name": "BaseBdev3", 00:17:23.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.870 "is_configured": false, 00:17:23.870 "data_offset": 0, 00:17:23.870 "data_size": 0 00:17:23.870 }, 00:17:23.870 { 00:17:23.870 "name": "BaseBdev4", 00:17:23.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.870 "is_configured": false, 00:17:23.870 "data_offset": 0, 00:17:23.870 "data_size": 0 00:17:23.870 } 00:17:23.870 ] 00:17:23.870 }' 00:17:23.870 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.870 18:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.475 18:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:24.475 [2024-07-24 18:20:33.018979] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:17:24.475 BaseBdev3 00:17:24.475 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:24.475 18:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:24.475 18:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:24.475 18:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:24.475 18:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:24.475 18:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:24.475 18:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.734 18:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:24.992 [ 00:17:24.992 { 00:17:24.992 "name": "BaseBdev3", 00:17:24.992 "aliases": [ 00:17:24.992 "6f17ff8a-3dbc-4e1d-b9e7-10718f612071" 00:17:24.992 ], 00:17:24.992 "product_name": "Malloc disk", 00:17:24.992 "block_size": 512, 00:17:24.992 "num_blocks": 65536, 00:17:24.992 "uuid": "6f17ff8a-3dbc-4e1d-b9e7-10718f612071", 00:17:24.992 "assigned_rate_limits": { 00:17:24.992 "rw_ios_per_sec": 0, 00:17:24.992 "rw_mbytes_per_sec": 0, 00:17:24.992 "r_mbytes_per_sec": 0, 00:17:24.992 "w_mbytes_per_sec": 0 00:17:24.992 }, 00:17:24.992 "claimed": true, 00:17:24.992 "claim_type": "exclusive_write", 00:17:24.992 "zoned": false, 00:17:24.992 "supported_io_types": { 00:17:24.992 "read": true, 00:17:24.992 "write": true, 00:17:24.992 "unmap": true, 00:17:24.992 "flush": true, 00:17:24.992 "reset": true, 00:17:24.993 "nvme_admin": false, 00:17:24.993 "nvme_io": 
false, 00:17:24.993 "nvme_io_md": false, 00:17:24.993 "write_zeroes": true, 00:17:24.993 "zcopy": true, 00:17:24.993 "get_zone_info": false, 00:17:24.993 "zone_management": false, 00:17:24.993 "zone_append": false, 00:17:24.993 "compare": false, 00:17:24.993 "compare_and_write": false, 00:17:24.993 "abort": true, 00:17:24.993 "seek_hole": false, 00:17:24.993 "seek_data": false, 00:17:24.993 "copy": true, 00:17:24.993 "nvme_iov_md": false 00:17:24.993 }, 00:17:24.993 "memory_domains": [ 00:17:24.993 { 00:17:24.993 "dma_device_id": "system", 00:17:24.993 "dma_device_type": 1 00:17:24.993 }, 00:17:24.993 { 00:17:24.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.993 "dma_device_type": 2 00:17:24.993 } 00:17:24.993 ], 00:17:24.993 "driver_specific": {} 00:17:24.993 } 00:17:24.993 ] 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.993 "name": "Existed_Raid", 00:17:24.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.993 "strip_size_kb": 0, 00:17:24.993 "state": "configuring", 00:17:24.993 "raid_level": "raid1", 00:17:24.993 "superblock": false, 00:17:24.993 "num_base_bdevs": 4, 00:17:24.993 "num_base_bdevs_discovered": 3, 00:17:24.993 "num_base_bdevs_operational": 4, 00:17:24.993 "base_bdevs_list": [ 00:17:24.993 { 00:17:24.993 "name": "BaseBdev1", 00:17:24.993 "uuid": "270138ee-cb4e-4be7-b07a-8f52b270e427", 00:17:24.993 "is_configured": true, 00:17:24.993 "data_offset": 0, 00:17:24.993 "data_size": 65536 00:17:24.993 }, 00:17:24.993 { 00:17:24.993 "name": "BaseBdev2", 00:17:24.993 "uuid": "85f4a2cd-359d-4315-88e8-c41aa6653c64", 00:17:24.993 "is_configured": true, 00:17:24.993 "data_offset": 0, 00:17:24.993 "data_size": 65536 00:17:24.993 }, 00:17:24.993 { 00:17:24.993 "name": "BaseBdev3", 00:17:24.993 "uuid": "6f17ff8a-3dbc-4e1d-b9e7-10718f612071", 00:17:24.993 "is_configured": true, 00:17:24.993 "data_offset": 0, 00:17:24.993 "data_size": 65536 00:17:24.993 }, 00:17:24.993 { 00:17:24.993 "name": "BaseBdev4", 00:17:24.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.993 "is_configured": false, 00:17:24.993 "data_offset": 0, 00:17:24.993 "data_size": 0 00:17:24.993 } 
00:17:24.993 ] 00:17:24.993 }' 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.993 18:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.560 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:25.819 [2024-07-24 18:20:34.184861] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:25.819 [2024-07-24 18:20:34.184892] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x19b0ab0 00:17:25.819 [2024-07-24 18:20:34.184897] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:25.819 [2024-07-24 18:20:34.185030] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b63cd0 00:17:25.819 [2024-07-24 18:20:34.185116] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19b0ab0 00:17:25.819 [2024-07-24 18:20:34.185122] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x19b0ab0 00:17:25.819 [2024-07-24 18:20:34.185239] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.819 BaseBdev4 00:17:25.819 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:25.819 18:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:25.819 18:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:25.819 18:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:25.819 18:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:25.819 18:20:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:25.819 18:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.819 18:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:26.079 [ 00:17:26.079 { 00:17:26.079 "name": "BaseBdev4", 00:17:26.079 "aliases": [ 00:17:26.079 "38dbbd34-c9a8-4715-a2f6-b90ef80d44f1" 00:17:26.079 ], 00:17:26.079 "product_name": "Malloc disk", 00:17:26.079 "block_size": 512, 00:17:26.079 "num_blocks": 65536, 00:17:26.079 "uuid": "38dbbd34-c9a8-4715-a2f6-b90ef80d44f1", 00:17:26.079 "assigned_rate_limits": { 00:17:26.079 "rw_ios_per_sec": 0, 00:17:26.079 "rw_mbytes_per_sec": 0, 00:17:26.079 "r_mbytes_per_sec": 0, 00:17:26.079 "w_mbytes_per_sec": 0 00:17:26.079 }, 00:17:26.079 "claimed": true, 00:17:26.079 "claim_type": "exclusive_write", 00:17:26.079 "zoned": false, 00:17:26.079 "supported_io_types": { 00:17:26.079 "read": true, 00:17:26.079 "write": true, 00:17:26.079 "unmap": true, 00:17:26.079 "flush": true, 00:17:26.079 "reset": true, 00:17:26.079 "nvme_admin": false, 00:17:26.079 "nvme_io": false, 00:17:26.079 "nvme_io_md": false, 00:17:26.079 "write_zeroes": true, 00:17:26.079 "zcopy": true, 00:17:26.079 "get_zone_info": false, 00:17:26.079 "zone_management": false, 00:17:26.079 "zone_append": false, 00:17:26.079 "compare": false, 00:17:26.079 "compare_and_write": false, 00:17:26.079 "abort": true, 00:17:26.079 "seek_hole": false, 00:17:26.079 "seek_data": false, 00:17:26.079 "copy": true, 00:17:26.079 "nvme_iov_md": false 00:17:26.079 }, 00:17:26.079 "memory_domains": [ 00:17:26.079 { 00:17:26.079 "dma_device_id": "system", 00:17:26.079 "dma_device_type": 1 00:17:26.079 }, 00:17:26.079 { 00:17:26.079 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:26.079 "dma_device_type": 2 00:17:26.079 } 00:17:26.079 ], 00:17:26.079 "driver_specific": {} 00:17:26.079 } 00:17:26.079 ] 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.079 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.339 18:20:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.339 "name": "Existed_Raid", 00:17:26.339 "uuid": "c2fba800-6de3-4612-b776-9a67e6d21c7f", 00:17:26.339 "strip_size_kb": 0, 00:17:26.339 "state": "online", 00:17:26.339 "raid_level": "raid1", 00:17:26.339 "superblock": false, 00:17:26.339 "num_base_bdevs": 4, 00:17:26.339 "num_base_bdevs_discovered": 4, 00:17:26.339 "num_base_bdevs_operational": 4, 00:17:26.339 "base_bdevs_list": [ 00:17:26.339 { 00:17:26.339 "name": "BaseBdev1", 00:17:26.339 "uuid": "270138ee-cb4e-4be7-b07a-8f52b270e427", 00:17:26.339 "is_configured": true, 00:17:26.339 "data_offset": 0, 00:17:26.339 "data_size": 65536 00:17:26.339 }, 00:17:26.339 { 00:17:26.339 "name": "BaseBdev2", 00:17:26.339 "uuid": "85f4a2cd-359d-4315-88e8-c41aa6653c64", 00:17:26.339 "is_configured": true, 00:17:26.339 "data_offset": 0, 00:17:26.339 "data_size": 65536 00:17:26.339 }, 00:17:26.339 { 00:17:26.339 "name": "BaseBdev3", 00:17:26.339 "uuid": "6f17ff8a-3dbc-4e1d-b9e7-10718f612071", 00:17:26.339 "is_configured": true, 00:17:26.339 "data_offset": 0, 00:17:26.339 "data_size": 65536 00:17:26.339 }, 00:17:26.339 { 00:17:26.339 "name": "BaseBdev4", 00:17:26.339 "uuid": "38dbbd34-c9a8-4715-a2f6-b90ef80d44f1", 00:17:26.339 "is_configured": true, 00:17:26.339 "data_offset": 0, 00:17:26.339 "data_size": 65536 00:17:26.339 } 00:17:26.339 ] 00:17:26.339 }' 00:17:26.339 18:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.339 18:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.598 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:26.598 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:26.598 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:26.598 18:20:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:26.598 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:26.598 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:26.598 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:26.598 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:26.857 [2024-07-24 18:20:35.320007] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.857 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:26.857 "name": "Existed_Raid", 00:17:26.857 "aliases": [ 00:17:26.857 "c2fba800-6de3-4612-b776-9a67e6d21c7f" 00:17:26.857 ], 00:17:26.857 "product_name": "Raid Volume", 00:17:26.857 "block_size": 512, 00:17:26.857 "num_blocks": 65536, 00:17:26.857 "uuid": "c2fba800-6de3-4612-b776-9a67e6d21c7f", 00:17:26.857 "assigned_rate_limits": { 00:17:26.857 "rw_ios_per_sec": 0, 00:17:26.857 "rw_mbytes_per_sec": 0, 00:17:26.857 "r_mbytes_per_sec": 0, 00:17:26.857 "w_mbytes_per_sec": 0 00:17:26.857 }, 00:17:26.857 "claimed": false, 00:17:26.857 "zoned": false, 00:17:26.857 "supported_io_types": { 00:17:26.857 "read": true, 00:17:26.857 "write": true, 00:17:26.857 "unmap": false, 00:17:26.857 "flush": false, 00:17:26.857 "reset": true, 00:17:26.857 "nvme_admin": false, 00:17:26.857 "nvme_io": false, 00:17:26.857 "nvme_io_md": false, 00:17:26.857 "write_zeroes": true, 00:17:26.857 "zcopy": false, 00:17:26.857 "get_zone_info": false, 00:17:26.857 "zone_management": false, 00:17:26.857 "zone_append": false, 00:17:26.857 "compare": false, 00:17:26.857 "compare_and_write": false, 00:17:26.857 "abort": false, 00:17:26.857 "seek_hole": false, 00:17:26.857 "seek_data": 
false, 00:17:26.857 "copy": false, 00:17:26.857 "nvme_iov_md": false 00:17:26.857 }, 00:17:26.857 "memory_domains": [ 00:17:26.857 { 00:17:26.857 "dma_device_id": "system", 00:17:26.858 "dma_device_type": 1 00:17:26.858 }, 00:17:26.858 { 00:17:26.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.858 "dma_device_type": 2 00:17:26.858 }, 00:17:26.858 { 00:17:26.858 "dma_device_id": "system", 00:17:26.858 "dma_device_type": 1 00:17:26.858 }, 00:17:26.858 { 00:17:26.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.858 "dma_device_type": 2 00:17:26.858 }, 00:17:26.858 { 00:17:26.858 "dma_device_id": "system", 00:17:26.858 "dma_device_type": 1 00:17:26.858 }, 00:17:26.858 { 00:17:26.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.858 "dma_device_type": 2 00:17:26.858 }, 00:17:26.858 { 00:17:26.858 "dma_device_id": "system", 00:17:26.858 "dma_device_type": 1 00:17:26.858 }, 00:17:26.858 { 00:17:26.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.858 "dma_device_type": 2 00:17:26.858 } 00:17:26.858 ], 00:17:26.858 "driver_specific": { 00:17:26.858 "raid": { 00:17:26.858 "uuid": "c2fba800-6de3-4612-b776-9a67e6d21c7f", 00:17:26.858 "strip_size_kb": 0, 00:17:26.858 "state": "online", 00:17:26.858 "raid_level": "raid1", 00:17:26.858 "superblock": false, 00:17:26.858 "num_base_bdevs": 4, 00:17:26.858 "num_base_bdevs_discovered": 4, 00:17:26.858 "num_base_bdevs_operational": 4, 00:17:26.858 "base_bdevs_list": [ 00:17:26.858 { 00:17:26.858 "name": "BaseBdev1", 00:17:26.858 "uuid": "270138ee-cb4e-4be7-b07a-8f52b270e427", 00:17:26.858 "is_configured": true, 00:17:26.858 "data_offset": 0, 00:17:26.858 "data_size": 65536 00:17:26.858 }, 00:17:26.858 { 00:17:26.858 "name": "BaseBdev2", 00:17:26.858 "uuid": "85f4a2cd-359d-4315-88e8-c41aa6653c64", 00:17:26.858 "is_configured": true, 00:17:26.858 "data_offset": 0, 00:17:26.858 "data_size": 65536 00:17:26.858 }, 00:17:26.858 { 00:17:26.858 "name": "BaseBdev3", 00:17:26.858 "uuid": 
"6f17ff8a-3dbc-4e1d-b9e7-10718f612071", 00:17:26.858 "is_configured": true, 00:17:26.858 "data_offset": 0, 00:17:26.858 "data_size": 65536 00:17:26.858 }, 00:17:26.858 { 00:17:26.858 "name": "BaseBdev4", 00:17:26.858 "uuid": "38dbbd34-c9a8-4715-a2f6-b90ef80d44f1", 00:17:26.858 "is_configured": true, 00:17:26.858 "data_offset": 0, 00:17:26.858 "data_size": 65536 00:17:26.858 } 00:17:26.858 ] 00:17:26.858 } 00:17:26.858 } 00:17:26.858 }' 00:17:26.858 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.858 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:26.858 BaseBdev2 00:17:26.858 BaseBdev3 00:17:26.858 BaseBdev4' 00:17:26.858 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.858 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:26.858 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.117 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.117 "name": "BaseBdev1", 00:17:27.117 "aliases": [ 00:17:27.117 "270138ee-cb4e-4be7-b07a-8f52b270e427" 00:17:27.117 ], 00:17:27.117 "product_name": "Malloc disk", 00:17:27.117 "block_size": 512, 00:17:27.117 "num_blocks": 65536, 00:17:27.117 "uuid": "270138ee-cb4e-4be7-b07a-8f52b270e427", 00:17:27.117 "assigned_rate_limits": { 00:17:27.117 "rw_ios_per_sec": 0, 00:17:27.117 "rw_mbytes_per_sec": 0, 00:17:27.117 "r_mbytes_per_sec": 0, 00:17:27.117 "w_mbytes_per_sec": 0 00:17:27.117 }, 00:17:27.117 "claimed": true, 00:17:27.117 "claim_type": "exclusive_write", 00:17:27.117 "zoned": false, 00:17:27.117 "supported_io_types": { 00:17:27.117 "read": true, 00:17:27.117 
"write": true, 00:17:27.117 "unmap": true, 00:17:27.117 "flush": true, 00:17:27.117 "reset": true, 00:17:27.117 "nvme_admin": false, 00:17:27.117 "nvme_io": false, 00:17:27.117 "nvme_io_md": false, 00:17:27.117 "write_zeroes": true, 00:17:27.117 "zcopy": true, 00:17:27.117 "get_zone_info": false, 00:17:27.117 "zone_management": false, 00:17:27.117 "zone_append": false, 00:17:27.117 "compare": false, 00:17:27.117 "compare_and_write": false, 00:17:27.117 "abort": true, 00:17:27.117 "seek_hole": false, 00:17:27.117 "seek_data": false, 00:17:27.117 "copy": true, 00:17:27.117 "nvme_iov_md": false 00:17:27.117 }, 00:17:27.117 "memory_domains": [ 00:17:27.117 { 00:17:27.117 "dma_device_id": "system", 00:17:27.117 "dma_device_type": 1 00:17:27.117 }, 00:17:27.117 { 00:17:27.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.117 "dma_device_type": 2 00:17:27.117 } 00:17:27.117 ], 00:17:27.117 "driver_specific": {} 00:17:27.117 }' 00:17:27.117 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.117 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.117 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.117 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.117 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.377 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.377 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.377 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.377 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.377 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.377 18:20:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.377 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.377 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.377 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:27.377 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.637 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.637 "name": "BaseBdev2", 00:17:27.637 "aliases": [ 00:17:27.637 "85f4a2cd-359d-4315-88e8-c41aa6653c64" 00:17:27.637 ], 00:17:27.637 "product_name": "Malloc disk", 00:17:27.637 "block_size": 512, 00:17:27.637 "num_blocks": 65536, 00:17:27.637 "uuid": "85f4a2cd-359d-4315-88e8-c41aa6653c64", 00:17:27.637 "assigned_rate_limits": { 00:17:27.637 "rw_ios_per_sec": 0, 00:17:27.637 "rw_mbytes_per_sec": 0, 00:17:27.637 "r_mbytes_per_sec": 0, 00:17:27.637 "w_mbytes_per_sec": 0 00:17:27.637 }, 00:17:27.637 "claimed": true, 00:17:27.637 "claim_type": "exclusive_write", 00:17:27.637 "zoned": false, 00:17:27.637 "supported_io_types": { 00:17:27.637 "read": true, 00:17:27.637 "write": true, 00:17:27.637 "unmap": true, 00:17:27.637 "flush": true, 00:17:27.637 "reset": true, 00:17:27.637 "nvme_admin": false, 00:17:27.637 "nvme_io": false, 00:17:27.637 "nvme_io_md": false, 00:17:27.637 "write_zeroes": true, 00:17:27.637 "zcopy": true, 00:17:27.637 "get_zone_info": false, 00:17:27.637 "zone_management": false, 00:17:27.637 "zone_append": false, 00:17:27.637 "compare": false, 00:17:27.637 "compare_and_write": false, 00:17:27.637 "abort": true, 00:17:27.637 "seek_hole": false, 00:17:27.637 "seek_data": false, 00:17:27.637 "copy": true, 00:17:27.637 "nvme_iov_md": false 00:17:27.637 }, 
00:17:27.637 "memory_domains": [ 00:17:27.637 { 00:17:27.637 "dma_device_id": "system", 00:17:27.637 "dma_device_type": 1 00:17:27.637 }, 00:17:27.637 { 00:17:27.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.637 "dma_device_type": 2 00:17:27.637 } 00:17:27.637 ], 00:17:27.637 "driver_specific": {} 00:17:27.637 }' 00:17:27.637 18:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.637 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.637 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.637 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.637 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.637 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.637 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.637 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.896 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.896 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.896 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.896 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.896 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.896 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:27.896 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.156 18:20:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.156 "name": "BaseBdev3", 00:17:28.156 "aliases": [ 00:17:28.156 "6f17ff8a-3dbc-4e1d-b9e7-10718f612071" 00:17:28.156 ], 00:17:28.156 "product_name": "Malloc disk", 00:17:28.156 "block_size": 512, 00:17:28.156 "num_blocks": 65536, 00:17:28.156 "uuid": "6f17ff8a-3dbc-4e1d-b9e7-10718f612071", 00:17:28.156 "assigned_rate_limits": { 00:17:28.156 "rw_ios_per_sec": 0, 00:17:28.156 "rw_mbytes_per_sec": 0, 00:17:28.156 "r_mbytes_per_sec": 0, 00:17:28.156 "w_mbytes_per_sec": 0 00:17:28.156 }, 00:17:28.156 "claimed": true, 00:17:28.156 "claim_type": "exclusive_write", 00:17:28.156 "zoned": false, 00:17:28.156 "supported_io_types": { 00:17:28.156 "read": true, 00:17:28.156 "write": true, 00:17:28.156 "unmap": true, 00:17:28.156 "flush": true, 00:17:28.156 "reset": true, 00:17:28.156 "nvme_admin": false, 00:17:28.156 "nvme_io": false, 00:17:28.156 "nvme_io_md": false, 00:17:28.156 "write_zeroes": true, 00:17:28.156 "zcopy": true, 00:17:28.156 "get_zone_info": false, 00:17:28.156 "zone_management": false, 00:17:28.156 "zone_append": false, 00:17:28.156 "compare": false, 00:17:28.156 "compare_and_write": false, 00:17:28.156 "abort": true, 00:17:28.156 "seek_hole": false, 00:17:28.156 "seek_data": false, 00:17:28.156 "copy": true, 00:17:28.156 "nvme_iov_md": false 00:17:28.156 }, 00:17:28.156 "memory_domains": [ 00:17:28.156 { 00:17:28.156 "dma_device_id": "system", 00:17:28.156 "dma_device_type": 1 00:17:28.156 }, 00:17:28.156 { 00:17:28.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.156 "dma_device_type": 2 00:17:28.156 } 00:17:28.156 ], 00:17:28.156 "driver_specific": {} 00:17:28.156 }' 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.156 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.415 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.415 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:28.415 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:28.415 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.415 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.415 "name": "BaseBdev4", 00:17:28.415 "aliases": [ 00:17:28.415 "38dbbd34-c9a8-4715-a2f6-b90ef80d44f1" 00:17:28.415 ], 00:17:28.415 "product_name": "Malloc disk", 00:17:28.415 "block_size": 512, 00:17:28.415 "num_blocks": 65536, 00:17:28.415 "uuid": "38dbbd34-c9a8-4715-a2f6-b90ef80d44f1", 00:17:28.415 "assigned_rate_limits": { 00:17:28.415 "rw_ios_per_sec": 0, 00:17:28.416 "rw_mbytes_per_sec": 0, 00:17:28.416 "r_mbytes_per_sec": 0, 00:17:28.416 "w_mbytes_per_sec": 0 00:17:28.416 }, 00:17:28.416 "claimed": true, 00:17:28.416 
"claim_type": "exclusive_write", 00:17:28.416 "zoned": false, 00:17:28.416 "supported_io_types": { 00:17:28.416 "read": true, 00:17:28.416 "write": true, 00:17:28.416 "unmap": true, 00:17:28.416 "flush": true, 00:17:28.416 "reset": true, 00:17:28.416 "nvme_admin": false, 00:17:28.416 "nvme_io": false, 00:17:28.416 "nvme_io_md": false, 00:17:28.416 "write_zeroes": true, 00:17:28.416 "zcopy": true, 00:17:28.416 "get_zone_info": false, 00:17:28.416 "zone_management": false, 00:17:28.416 "zone_append": false, 00:17:28.416 "compare": false, 00:17:28.416 "compare_and_write": false, 00:17:28.416 "abort": true, 00:17:28.416 "seek_hole": false, 00:17:28.416 "seek_data": false, 00:17:28.416 "copy": true, 00:17:28.416 "nvme_iov_md": false 00:17:28.416 }, 00:17:28.416 "memory_domains": [ 00:17:28.416 { 00:17:28.416 "dma_device_id": "system", 00:17:28.416 "dma_device_type": 1 00:17:28.416 }, 00:17:28.416 { 00:17:28.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.416 "dma_device_type": 2 00:17:28.416 } 00:17:28.416 ], 00:17:28.416 "driver_specific": {} 00:17:28.416 }' 00:17:28.416 18:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.416 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.674 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.675 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.675 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.675 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.675 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.675 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.675 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:17:28.675 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.675 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:28.933 [2024-07-24 18:20:37.429243] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.933 18:20:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.933 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.192 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.192 "name": "Existed_Raid", 00:17:29.192 "uuid": "c2fba800-6de3-4612-b776-9a67e6d21c7f", 00:17:29.192 "strip_size_kb": 0, 00:17:29.192 "state": "online", 00:17:29.192 "raid_level": "raid1", 00:17:29.192 "superblock": false, 00:17:29.192 "num_base_bdevs": 4, 00:17:29.192 "num_base_bdevs_discovered": 3, 00:17:29.192 "num_base_bdevs_operational": 3, 00:17:29.192 "base_bdevs_list": [ 00:17:29.192 { 00:17:29.192 "name": null, 00:17:29.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.192 "is_configured": false, 00:17:29.192 "data_offset": 0, 00:17:29.192 "data_size": 65536 00:17:29.192 }, 00:17:29.192 { 00:17:29.192 "name": "BaseBdev2", 00:17:29.192 "uuid": "85f4a2cd-359d-4315-88e8-c41aa6653c64", 00:17:29.192 "is_configured": true, 00:17:29.192 "data_offset": 0, 00:17:29.192 "data_size": 65536 00:17:29.192 }, 00:17:29.192 { 00:17:29.192 "name": "BaseBdev3", 00:17:29.192 "uuid": "6f17ff8a-3dbc-4e1d-b9e7-10718f612071", 00:17:29.192 "is_configured": true, 00:17:29.192 "data_offset": 0, 00:17:29.192 "data_size": 65536 00:17:29.192 }, 00:17:29.192 { 00:17:29.192 "name": "BaseBdev4", 00:17:29.192 "uuid": "38dbbd34-c9a8-4715-a2f6-b90ef80d44f1", 00:17:29.192 "is_configured": true, 00:17:29.192 "data_offset": 0, 00:17:29.192 
"data_size": 65536 00:17:29.192 } 00:17:29.192 ] 00:17:29.192 }' 00:17:29.192 18:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.192 18:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.758 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:29.758 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:29.758 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.758 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:29.758 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:29.758 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:29.758 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:30.017 [2024-07-24 18:20:38.460709] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:30.017 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:30.017 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:30.017 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.017 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:30.276 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:30.276 18:20:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.276 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:30.276 [2024-07-24 18:20:38.815570] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:30.276 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:30.276 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:30.276 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.276 18:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:30.535 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:30.535 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:30.535 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:30.794 [2024-07-24 18:20:39.165963] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:30.794 [2024-07-24 18:20:39.166020] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.794 [2024-07-24 18:20:39.175630] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.794 [2024-07-24 18:20:39.175652] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.794 [2024-07-24 18:20:39.175660] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x19b0ab0 name Existed_Raid, state offline 00:17:30.794 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:30.794 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:30.794 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.794 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:30.794 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:30.794 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:30.794 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:30.794 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:30.794 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:30.794 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:31.053 BaseBdev2 00:17:31.053 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:31.053 18:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:31.053 18:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:31.053 18:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:31.053 18:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:31.053 18:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:17:31.053 18:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.312 18:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.312 [ 00:17:31.312 { 00:17:31.312 "name": "BaseBdev2", 00:17:31.312 "aliases": [ 00:17:31.312 "46251d32-6205-47fd-acab-926ed57b309d" 00:17:31.312 ], 00:17:31.312 "product_name": "Malloc disk", 00:17:31.312 "block_size": 512, 00:17:31.312 "num_blocks": 65536, 00:17:31.312 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:31.312 "assigned_rate_limits": { 00:17:31.312 "rw_ios_per_sec": 0, 00:17:31.312 "rw_mbytes_per_sec": 0, 00:17:31.312 "r_mbytes_per_sec": 0, 00:17:31.312 "w_mbytes_per_sec": 0 00:17:31.312 }, 00:17:31.312 "claimed": false, 00:17:31.312 "zoned": false, 00:17:31.312 "supported_io_types": { 00:17:31.312 "read": true, 00:17:31.312 "write": true, 00:17:31.312 "unmap": true, 00:17:31.312 "flush": true, 00:17:31.312 "reset": true, 00:17:31.312 "nvme_admin": false, 00:17:31.312 "nvme_io": false, 00:17:31.312 "nvme_io_md": false, 00:17:31.312 "write_zeroes": true, 00:17:31.312 "zcopy": true, 00:17:31.312 "get_zone_info": false, 00:17:31.312 "zone_management": false, 00:17:31.312 "zone_append": false, 00:17:31.312 "compare": false, 00:17:31.312 "compare_and_write": false, 00:17:31.312 "abort": true, 00:17:31.312 "seek_hole": false, 00:17:31.312 "seek_data": false, 00:17:31.312 "copy": true, 00:17:31.312 "nvme_iov_md": false 00:17:31.312 }, 00:17:31.312 "memory_domains": [ 00:17:31.312 { 00:17:31.312 "dma_device_id": "system", 00:17:31.312 "dma_device_type": 1 00:17:31.312 }, 00:17:31.312 { 00:17:31.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.312 "dma_device_type": 2 00:17:31.312 } 00:17:31.312 ], 00:17:31.312 
"driver_specific": {} 00:17:31.312 } 00:17:31.312 ] 00:17:31.312 18:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:31.312 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:31.312 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:31.312 18:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:31.571 BaseBdev3 00:17:31.571 18:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:31.571 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:31.571 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:31.571 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:31.571 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:31.571 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:31.571 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.830 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:31.830 [ 00:17:31.830 { 00:17:31.830 "name": "BaseBdev3", 00:17:31.830 "aliases": [ 00:17:31.830 "fb86e15e-c097-45cc-a339-c569d213d6d5" 00:17:31.830 ], 00:17:31.830 "product_name": "Malloc disk", 00:17:31.830 "block_size": 512, 00:17:31.830 "num_blocks": 65536, 00:17:31.830 "uuid": 
"fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:31.830 "assigned_rate_limits": { 00:17:31.830 "rw_ios_per_sec": 0, 00:17:31.830 "rw_mbytes_per_sec": 0, 00:17:31.830 "r_mbytes_per_sec": 0, 00:17:31.830 "w_mbytes_per_sec": 0 00:17:31.830 }, 00:17:31.830 "claimed": false, 00:17:31.830 "zoned": false, 00:17:31.830 "supported_io_types": { 00:17:31.830 "read": true, 00:17:31.830 "write": true, 00:17:31.830 "unmap": true, 00:17:31.830 "flush": true, 00:17:31.830 "reset": true, 00:17:31.830 "nvme_admin": false, 00:17:31.830 "nvme_io": false, 00:17:31.830 "nvme_io_md": false, 00:17:31.830 "write_zeroes": true, 00:17:31.830 "zcopy": true, 00:17:31.830 "get_zone_info": false, 00:17:31.830 "zone_management": false, 00:17:31.830 "zone_append": false, 00:17:31.830 "compare": false, 00:17:31.830 "compare_and_write": false, 00:17:31.830 "abort": true, 00:17:31.830 "seek_hole": false, 00:17:31.830 "seek_data": false, 00:17:31.830 "copy": true, 00:17:31.830 "nvme_iov_md": false 00:17:31.830 }, 00:17:31.830 "memory_domains": [ 00:17:31.830 { 00:17:31.830 "dma_device_id": "system", 00:17:31.830 "dma_device_type": 1 00:17:31.830 }, 00:17:31.830 { 00:17:31.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.831 "dma_device_type": 2 00:17:31.831 } 00:17:31.831 ], 00:17:31.831 "driver_specific": {} 00:17:31.831 } 00:17:31.831 ] 00:17:31.831 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:31.831 18:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:31.831 18:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:31.831 18:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:32.089 BaseBdev4 00:17:32.089 18:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 
00:17:32.089 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:32.089 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:32.089 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:32.089 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:32.089 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:32.089 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:32.348 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:32.348 [ 00:17:32.348 { 00:17:32.348 "name": "BaseBdev4", 00:17:32.348 "aliases": [ 00:17:32.348 "dfa4df81-9394-4be7-9a80-3badb89e3ab3" 00:17:32.348 ], 00:17:32.348 "product_name": "Malloc disk", 00:17:32.348 "block_size": 512, 00:17:32.348 "num_blocks": 65536, 00:17:32.348 "uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:32.348 "assigned_rate_limits": { 00:17:32.348 "rw_ios_per_sec": 0, 00:17:32.348 "rw_mbytes_per_sec": 0, 00:17:32.348 "r_mbytes_per_sec": 0, 00:17:32.348 "w_mbytes_per_sec": 0 00:17:32.348 }, 00:17:32.348 "claimed": false, 00:17:32.348 "zoned": false, 00:17:32.348 "supported_io_types": { 00:17:32.348 "read": true, 00:17:32.348 "write": true, 00:17:32.348 "unmap": true, 00:17:32.348 "flush": true, 00:17:32.348 "reset": true, 00:17:32.348 "nvme_admin": false, 00:17:32.348 "nvme_io": false, 00:17:32.348 "nvme_io_md": false, 00:17:32.348 "write_zeroes": true, 00:17:32.348 "zcopy": true, 00:17:32.348 "get_zone_info": false, 00:17:32.348 "zone_management": false, 00:17:32.348 
"zone_append": false, 00:17:32.348 "compare": false, 00:17:32.348 "compare_and_write": false, 00:17:32.348 "abort": true, 00:17:32.348 "seek_hole": false, 00:17:32.348 "seek_data": false, 00:17:32.348 "copy": true, 00:17:32.348 "nvme_iov_md": false 00:17:32.348 }, 00:17:32.348 "memory_domains": [ 00:17:32.348 { 00:17:32.348 "dma_device_id": "system", 00:17:32.348 "dma_device_type": 1 00:17:32.348 }, 00:17:32.348 { 00:17:32.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.348 "dma_device_type": 2 00:17:32.348 } 00:17:32.348 ], 00:17:32.348 "driver_specific": {} 00:17:32.348 } 00:17:32.348 ] 00:17:32.348 18:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:32.348 18:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:32.348 18:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:32.348 18:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:32.607 [2024-07-24 18:20:41.039959] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.607 [2024-07-24 18:20:41.039993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.607 [2024-07-24 18:20:41.040005] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.607 [2024-07-24 18:20:41.040920] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:32.608 [2024-07-24 18:20:41.040949] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:32.608 18:20:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.608 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.867 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.867 "name": "Existed_Raid", 00:17:32.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.867 "strip_size_kb": 0, 00:17:32.867 "state": "configuring", 00:17:32.867 "raid_level": "raid1", 00:17:32.867 "superblock": false, 00:17:32.867 "num_base_bdevs": 4, 00:17:32.867 "num_base_bdevs_discovered": 3, 00:17:32.867 "num_base_bdevs_operational": 4, 00:17:32.867 "base_bdevs_list": [ 00:17:32.867 { 00:17:32.867 "name": "BaseBdev1", 00:17:32.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.867 "is_configured": false, 00:17:32.867 "data_offset": 
0, 00:17:32.867 "data_size": 0 00:17:32.867 }, 00:17:32.867 { 00:17:32.867 "name": "BaseBdev2", 00:17:32.867 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:32.867 "is_configured": true, 00:17:32.867 "data_offset": 0, 00:17:32.867 "data_size": 65536 00:17:32.867 }, 00:17:32.867 { 00:17:32.867 "name": "BaseBdev3", 00:17:32.867 "uuid": "fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:32.867 "is_configured": true, 00:17:32.867 "data_offset": 0, 00:17:32.867 "data_size": 65536 00:17:32.867 }, 00:17:32.867 { 00:17:32.867 "name": "BaseBdev4", 00:17:32.867 "uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:32.867 "is_configured": true, 00:17:32.867 "data_offset": 0, 00:17:32.867 "data_size": 65536 00:17:32.867 } 00:17:32.867 ] 00:17:32.867 }' 00:17:32.867 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.867 18:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.126 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:33.385 [2024-07-24 18:20:41.846025] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.385 18:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.643 18:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.643 "name": "Existed_Raid", 00:17:33.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.643 "strip_size_kb": 0, 00:17:33.643 "state": "configuring", 00:17:33.643 "raid_level": "raid1", 00:17:33.643 "superblock": false, 00:17:33.643 "num_base_bdevs": 4, 00:17:33.643 "num_base_bdevs_discovered": 2, 00:17:33.643 "num_base_bdevs_operational": 4, 00:17:33.643 "base_bdevs_list": [ 00:17:33.643 { 00:17:33.643 "name": "BaseBdev1", 00:17:33.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.643 "is_configured": false, 00:17:33.643 "data_offset": 0, 00:17:33.643 "data_size": 0 00:17:33.643 }, 00:17:33.643 { 00:17:33.643 "name": null, 00:17:33.643 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:33.643 "is_configured": false, 00:17:33.643 "data_offset": 0, 00:17:33.643 "data_size": 65536 00:17:33.643 }, 00:17:33.643 { 00:17:33.643 "name": "BaseBdev3", 00:17:33.643 "uuid": "fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:33.643 "is_configured": true, 00:17:33.643 "data_offset": 0, 00:17:33.643 "data_size": 65536 00:17:33.643 }, 00:17:33.643 { 00:17:33.643 "name": "BaseBdev4", 00:17:33.643 
"uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:33.644 "is_configured": true, 00:17:33.644 "data_offset": 0, 00:17:33.644 "data_size": 65536 00:17:33.644 } 00:17:33.644 ] 00:17:33.644 }' 00:17:33.644 18:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.644 18:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.212 18:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.212 18:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:34.212 18:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:34.212 18:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:34.471 [2024-07-24 18:20:42.887536] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.471 BaseBdev1 00:17:34.471 18:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:34.471 18:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:34.471 18:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:34.471 18:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:34.471 18:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:34.471 18:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:34.471 18:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:34.731 [ 00:17:34.731 { 00:17:34.731 "name": "BaseBdev1", 00:17:34.731 "aliases": [ 00:17:34.731 "1ad83961-109f-4126-b43d-a777bd0269a7" 00:17:34.731 ], 00:17:34.731 "product_name": "Malloc disk", 00:17:34.731 "block_size": 512, 00:17:34.731 "num_blocks": 65536, 00:17:34.731 "uuid": "1ad83961-109f-4126-b43d-a777bd0269a7", 00:17:34.731 "assigned_rate_limits": { 00:17:34.731 "rw_ios_per_sec": 0, 00:17:34.731 "rw_mbytes_per_sec": 0, 00:17:34.731 "r_mbytes_per_sec": 0, 00:17:34.731 "w_mbytes_per_sec": 0 00:17:34.731 }, 00:17:34.731 "claimed": true, 00:17:34.731 "claim_type": "exclusive_write", 00:17:34.731 "zoned": false, 00:17:34.731 "supported_io_types": { 00:17:34.731 "read": true, 00:17:34.731 "write": true, 00:17:34.731 "unmap": true, 00:17:34.731 "flush": true, 00:17:34.731 "reset": true, 00:17:34.731 "nvme_admin": false, 00:17:34.731 "nvme_io": false, 00:17:34.731 "nvme_io_md": false, 00:17:34.731 "write_zeroes": true, 00:17:34.731 "zcopy": true, 00:17:34.731 "get_zone_info": false, 00:17:34.731 "zone_management": false, 00:17:34.731 "zone_append": false, 00:17:34.731 "compare": false, 00:17:34.731 "compare_and_write": false, 00:17:34.731 "abort": true, 00:17:34.731 "seek_hole": false, 00:17:34.731 "seek_data": false, 00:17:34.731 "copy": true, 00:17:34.731 "nvme_iov_md": false 00:17:34.731 }, 00:17:34.731 "memory_domains": [ 00:17:34.731 { 00:17:34.731 "dma_device_id": "system", 00:17:34.731 "dma_device_type": 1 00:17:34.731 }, 00:17:34.731 { 00:17:34.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.731 "dma_device_type": 2 00:17:34.731 } 00:17:34.731 ], 00:17:34.731 "driver_specific": {} 00:17:34.731 } 00:17:34.731 ] 
00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.731 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.989 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.989 "name": "Existed_Raid", 00:17:34.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.989 "strip_size_kb": 0, 00:17:34.989 "state": "configuring", 00:17:34.989 "raid_level": "raid1", 00:17:34.989 "superblock": false, 00:17:34.989 "num_base_bdevs": 4, 00:17:34.989 
"num_base_bdevs_discovered": 3, 00:17:34.989 "num_base_bdevs_operational": 4, 00:17:34.989 "base_bdevs_list": [ 00:17:34.989 { 00:17:34.989 "name": "BaseBdev1", 00:17:34.989 "uuid": "1ad83961-109f-4126-b43d-a777bd0269a7", 00:17:34.989 "is_configured": true, 00:17:34.989 "data_offset": 0, 00:17:34.989 "data_size": 65536 00:17:34.989 }, 00:17:34.989 { 00:17:34.989 "name": null, 00:17:34.989 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:34.989 "is_configured": false, 00:17:34.989 "data_offset": 0, 00:17:34.989 "data_size": 65536 00:17:34.989 }, 00:17:34.989 { 00:17:34.989 "name": "BaseBdev3", 00:17:34.989 "uuid": "fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:34.989 "is_configured": true, 00:17:34.989 "data_offset": 0, 00:17:34.989 "data_size": 65536 00:17:34.989 }, 00:17:34.989 { 00:17:34.989 "name": "BaseBdev4", 00:17:34.989 "uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:34.989 "is_configured": true, 00:17:34.989 "data_offset": 0, 00:17:34.989 "data_size": 65536 00:17:34.989 } 00:17:34.989 ] 00:17:34.989 }' 00:17:34.989 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.989 18:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.556 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.556 18:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:35.556 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:35.556 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:35.815 [2024-07-24 18:20:44.259090] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.815 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.074 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.074 "name": "Existed_Raid", 00:17:36.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.074 "strip_size_kb": 0, 00:17:36.074 "state": "configuring", 00:17:36.074 "raid_level": "raid1", 00:17:36.074 "superblock": false, 00:17:36.074 "num_base_bdevs": 4, 00:17:36.074 "num_base_bdevs_discovered": 2, 00:17:36.074 "num_base_bdevs_operational": 4, 00:17:36.074 "base_bdevs_list": 
[ 00:17:36.074 { 00:17:36.074 "name": "BaseBdev1", 00:17:36.074 "uuid": "1ad83961-109f-4126-b43d-a777bd0269a7", 00:17:36.074 "is_configured": true, 00:17:36.074 "data_offset": 0, 00:17:36.074 "data_size": 65536 00:17:36.075 }, 00:17:36.075 { 00:17:36.075 "name": null, 00:17:36.075 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:36.075 "is_configured": false, 00:17:36.075 "data_offset": 0, 00:17:36.075 "data_size": 65536 00:17:36.075 }, 00:17:36.075 { 00:17:36.075 "name": null, 00:17:36.075 "uuid": "fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:36.075 "is_configured": false, 00:17:36.075 "data_offset": 0, 00:17:36.075 "data_size": 65536 00:17:36.075 }, 00:17:36.075 { 00:17:36.075 "name": "BaseBdev4", 00:17:36.075 "uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:36.075 "is_configured": true, 00:17:36.075 "data_offset": 0, 00:17:36.075 "data_size": 65536 00:17:36.075 } 00:17:36.075 ] 00:17:36.075 }' 00:17:36.075 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.075 18:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.333 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:36.333 18:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.592 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:36.593 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:36.852 [2024-07-24 18:20:45.225599] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.852 "name": "Existed_Raid", 00:17:36.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.852 "strip_size_kb": 0, 00:17:36.852 "state": "configuring", 00:17:36.852 "raid_level": "raid1", 00:17:36.852 "superblock": false, 00:17:36.852 "num_base_bdevs": 4, 00:17:36.852 "num_base_bdevs_discovered": 3, 00:17:36.852 "num_base_bdevs_operational": 4, 00:17:36.852 "base_bdevs_list": [ 00:17:36.852 { 00:17:36.852 "name": "BaseBdev1", 00:17:36.852 "uuid": 
"1ad83961-109f-4126-b43d-a777bd0269a7", 00:17:36.852 "is_configured": true, 00:17:36.852 "data_offset": 0, 00:17:36.852 "data_size": 65536 00:17:36.852 }, 00:17:36.852 { 00:17:36.852 "name": null, 00:17:36.852 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:36.852 "is_configured": false, 00:17:36.852 "data_offset": 0, 00:17:36.852 "data_size": 65536 00:17:36.852 }, 00:17:36.852 { 00:17:36.852 "name": "BaseBdev3", 00:17:36.852 "uuid": "fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:36.852 "is_configured": true, 00:17:36.852 "data_offset": 0, 00:17:36.852 "data_size": 65536 00:17:36.852 }, 00:17:36.852 { 00:17:36.852 "name": "BaseBdev4", 00:17:36.852 "uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:36.852 "is_configured": true, 00:17:36.852 "data_offset": 0, 00:17:36.852 "data_size": 65536 00:17:36.852 } 00:17:36.852 ] 00:17:36.852 }' 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.852 18:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.421 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.421 18:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:37.680 [2024-07-24 18:20:46.184087] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:37.680 18:20:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.680 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.681 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.681 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.940 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.940 "name": "Existed_Raid", 00:17:37.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.940 "strip_size_kb": 0, 00:17:37.940 "state": "configuring", 00:17:37.940 "raid_level": "raid1", 00:17:37.940 "superblock": false, 00:17:37.940 "num_base_bdevs": 4, 00:17:37.940 "num_base_bdevs_discovered": 2, 00:17:37.940 "num_base_bdevs_operational": 4, 00:17:37.940 "base_bdevs_list": [ 00:17:37.940 { 00:17:37.940 "name": null, 00:17:37.940 "uuid": "1ad83961-109f-4126-b43d-a777bd0269a7", 00:17:37.940 "is_configured": false, 00:17:37.940 "data_offset": 0, 
00:17:37.940 "data_size": 65536 00:17:37.940 }, 00:17:37.940 { 00:17:37.940 "name": null, 00:17:37.940 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:37.940 "is_configured": false, 00:17:37.940 "data_offset": 0, 00:17:37.940 "data_size": 65536 00:17:37.940 }, 00:17:37.940 { 00:17:37.940 "name": "BaseBdev3", 00:17:37.940 "uuid": "fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:37.940 "is_configured": true, 00:17:37.940 "data_offset": 0, 00:17:37.940 "data_size": 65536 00:17:37.940 }, 00:17:37.940 { 00:17:37.940 "name": "BaseBdev4", 00:17:37.940 "uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:37.940 "is_configured": true, 00:17:37.940 "data_offset": 0, 00:17:37.940 "data_size": 65536 00:17:37.940 } 00:17:37.940 ] 00:17:37.940 }' 00:17:37.940 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.940 18:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.550 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:38.550 18:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.550 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:38.550 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:38.809 [2024-07-24 18:20:47.164280] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.809 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.809 "name": "Existed_Raid", 00:17:38.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.809 "strip_size_kb": 0, 00:17:38.809 "state": "configuring", 00:17:38.810 "raid_level": "raid1", 00:17:38.810 "superblock": false, 00:17:38.810 "num_base_bdevs": 4, 00:17:38.810 "num_base_bdevs_discovered": 3, 00:17:38.810 "num_base_bdevs_operational": 4, 00:17:38.810 "base_bdevs_list": [ 00:17:38.810 { 00:17:38.810 "name": null, 00:17:38.810 "uuid": "1ad83961-109f-4126-b43d-a777bd0269a7", 00:17:38.810 "is_configured": false, 00:17:38.810 "data_offset": 0, 00:17:38.810 "data_size": 65536 00:17:38.810 }, 00:17:38.810 { 
00:17:38.810 "name": "BaseBdev2", 00:17:38.810 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:38.810 "is_configured": true, 00:17:38.810 "data_offset": 0, 00:17:38.810 "data_size": 65536 00:17:38.810 }, 00:17:38.810 { 00:17:38.810 "name": "BaseBdev3", 00:17:38.810 "uuid": "fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:38.810 "is_configured": true, 00:17:38.810 "data_offset": 0, 00:17:38.810 "data_size": 65536 00:17:38.810 }, 00:17:38.810 { 00:17:38.810 "name": "BaseBdev4", 00:17:38.810 "uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:38.810 "is_configured": true, 00:17:38.810 "data_offset": 0, 00:17:38.810 "data_size": 65536 00:17:38.810 } 00:17:38.810 ] 00:17:38.810 }' 00:17:38.810 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.810 18:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.378 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.378 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:39.636 18:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:39.636 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.636 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:39.636 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1ad83961-109f-4126-b43d-a777bd0269a7 00:17:39.895 [2024-07-24 18:20:48.313990] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:39.895 [2024-07-24 18:20:48.314024] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x19a8450 00:17:39.895 [2024-07-24 18:20:48.314030] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:39.895 [2024-07-24 18:20:48.314165] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19a9840 00:17:39.895 [2024-07-24 18:20:48.314253] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19a8450 00:17:39.895 [2024-07-24 18:20:48.314259] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x19a8450 00:17:39.895 [2024-07-24 18:20:48.314377] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.895 NewBaseBdev 00:17:39.895 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:39.895 18:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:39.895 18:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:39.895 18:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:39.895 18:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:39.895 18:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:39.895 18:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.895 18:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:40.155 [ 00:17:40.155 { 
00:17:40.155 "name": "NewBaseBdev", 00:17:40.155 "aliases": [ 00:17:40.155 "1ad83961-109f-4126-b43d-a777bd0269a7" 00:17:40.155 ], 00:17:40.155 "product_name": "Malloc disk", 00:17:40.155 "block_size": 512, 00:17:40.155 "num_blocks": 65536, 00:17:40.155 "uuid": "1ad83961-109f-4126-b43d-a777bd0269a7", 00:17:40.155 "assigned_rate_limits": { 00:17:40.155 "rw_ios_per_sec": 0, 00:17:40.155 "rw_mbytes_per_sec": 0, 00:17:40.155 "r_mbytes_per_sec": 0, 00:17:40.155 "w_mbytes_per_sec": 0 00:17:40.155 }, 00:17:40.155 "claimed": true, 00:17:40.155 "claim_type": "exclusive_write", 00:17:40.155 "zoned": false, 00:17:40.155 "supported_io_types": { 00:17:40.155 "read": true, 00:17:40.155 "write": true, 00:17:40.155 "unmap": true, 00:17:40.155 "flush": true, 00:17:40.155 "reset": true, 00:17:40.155 "nvme_admin": false, 00:17:40.155 "nvme_io": false, 00:17:40.155 "nvme_io_md": false, 00:17:40.155 "write_zeroes": true, 00:17:40.155 "zcopy": true, 00:17:40.155 "get_zone_info": false, 00:17:40.155 "zone_management": false, 00:17:40.155 "zone_append": false, 00:17:40.155 "compare": false, 00:17:40.155 "compare_and_write": false, 00:17:40.155 "abort": true, 00:17:40.155 "seek_hole": false, 00:17:40.155 "seek_data": false, 00:17:40.155 "copy": true, 00:17:40.155 "nvme_iov_md": false 00:17:40.155 }, 00:17:40.155 "memory_domains": [ 00:17:40.155 { 00:17:40.155 "dma_device_id": "system", 00:17:40.155 "dma_device_type": 1 00:17:40.155 }, 00:17:40.155 { 00:17:40.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.155 "dma_device_type": 2 00:17:40.155 } 00:17:40.155 ], 00:17:40.155 "driver_specific": {} 00:17:40.155 } 00:17:40.155 ] 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.155 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.414 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.414 "name": "Existed_Raid", 00:17:40.414 "uuid": "bc99748e-c5a1-4881-b913-4ba29d25c992", 00:17:40.414 "strip_size_kb": 0, 00:17:40.414 "state": "online", 00:17:40.414 "raid_level": "raid1", 00:17:40.414 "superblock": false, 00:17:40.414 "num_base_bdevs": 4, 00:17:40.414 "num_base_bdevs_discovered": 4, 00:17:40.414 "num_base_bdevs_operational": 4, 00:17:40.414 "base_bdevs_list": [ 00:17:40.414 { 00:17:40.414 "name": "NewBaseBdev", 00:17:40.414 "uuid": "1ad83961-109f-4126-b43d-a777bd0269a7", 00:17:40.414 "is_configured": true, 00:17:40.414 "data_offset": 0, 00:17:40.414 "data_size": 65536 00:17:40.414 }, 00:17:40.414 { 00:17:40.414 "name": "BaseBdev2", 
00:17:40.414 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:40.414 "is_configured": true, 00:17:40.414 "data_offset": 0, 00:17:40.414 "data_size": 65536 00:17:40.414 }, 00:17:40.414 { 00:17:40.414 "name": "BaseBdev3", 00:17:40.414 "uuid": "fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:40.414 "is_configured": true, 00:17:40.414 "data_offset": 0, 00:17:40.414 "data_size": 65536 00:17:40.414 }, 00:17:40.414 { 00:17:40.414 "name": "BaseBdev4", 00:17:40.414 "uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:40.414 "is_configured": true, 00:17:40.414 "data_offset": 0, 00:17:40.414 "data_size": 65536 00:17:40.414 } 00:17:40.414 ] 00:17:40.414 }' 00:17:40.414 18:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.414 18:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:40.982 [2024-07-24 18:20:49.437070] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.982 18:20:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:40.982 "name": "Existed_Raid", 00:17:40.982 "aliases": [ 00:17:40.982 "bc99748e-c5a1-4881-b913-4ba29d25c992" 00:17:40.982 ], 00:17:40.982 "product_name": "Raid Volume", 00:17:40.982 "block_size": 512, 00:17:40.982 "num_blocks": 65536, 00:17:40.982 "uuid": "bc99748e-c5a1-4881-b913-4ba29d25c992", 00:17:40.982 "assigned_rate_limits": { 00:17:40.982 "rw_ios_per_sec": 0, 00:17:40.982 "rw_mbytes_per_sec": 0, 00:17:40.982 "r_mbytes_per_sec": 0, 00:17:40.982 "w_mbytes_per_sec": 0 00:17:40.982 }, 00:17:40.982 "claimed": false, 00:17:40.982 "zoned": false, 00:17:40.982 "supported_io_types": { 00:17:40.982 "read": true, 00:17:40.982 "write": true, 00:17:40.982 "unmap": false, 00:17:40.982 "flush": false, 00:17:40.982 "reset": true, 00:17:40.982 "nvme_admin": false, 00:17:40.982 "nvme_io": false, 00:17:40.982 "nvme_io_md": false, 00:17:40.982 "write_zeroes": true, 00:17:40.982 "zcopy": false, 00:17:40.982 "get_zone_info": false, 00:17:40.982 "zone_management": false, 00:17:40.982 "zone_append": false, 00:17:40.982 "compare": false, 00:17:40.982 "compare_and_write": false, 00:17:40.982 "abort": false, 00:17:40.982 "seek_hole": false, 00:17:40.982 "seek_data": false, 00:17:40.982 "copy": false, 00:17:40.982 "nvme_iov_md": false 00:17:40.982 }, 00:17:40.982 "memory_domains": [ 00:17:40.982 { 00:17:40.982 "dma_device_id": "system", 00:17:40.982 "dma_device_type": 1 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.982 "dma_device_type": 2 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "dma_device_id": "system", 00:17:40.982 "dma_device_type": 1 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.982 "dma_device_type": 2 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "dma_device_id": "system", 00:17:40.982 "dma_device_type": 1 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:40.982 "dma_device_type": 2 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "dma_device_id": "system", 00:17:40.982 "dma_device_type": 1 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.982 "dma_device_type": 2 00:17:40.982 } 00:17:40.982 ], 00:17:40.982 "driver_specific": { 00:17:40.982 "raid": { 00:17:40.982 "uuid": "bc99748e-c5a1-4881-b913-4ba29d25c992", 00:17:40.982 "strip_size_kb": 0, 00:17:40.982 "state": "online", 00:17:40.982 "raid_level": "raid1", 00:17:40.982 "superblock": false, 00:17:40.982 "num_base_bdevs": 4, 00:17:40.982 "num_base_bdevs_discovered": 4, 00:17:40.982 "num_base_bdevs_operational": 4, 00:17:40.982 "base_bdevs_list": [ 00:17:40.982 { 00:17:40.982 "name": "NewBaseBdev", 00:17:40.982 "uuid": "1ad83961-109f-4126-b43d-a777bd0269a7", 00:17:40.982 "is_configured": true, 00:17:40.982 "data_offset": 0, 00:17:40.982 "data_size": 65536 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "name": "BaseBdev2", 00:17:40.982 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:40.982 "is_configured": true, 00:17:40.982 "data_offset": 0, 00:17:40.982 "data_size": 65536 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "name": "BaseBdev3", 00:17:40.982 "uuid": "fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:40.982 "is_configured": true, 00:17:40.982 "data_offset": 0, 00:17:40.982 "data_size": 65536 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "name": "BaseBdev4", 00:17:40.982 "uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:40.982 "is_configured": true, 00:17:40.982 "data_offset": 0, 00:17:40.982 "data_size": 65536 00:17:40.982 } 00:17:40.982 ] 00:17:40.982 } 00:17:40.982 } 00:17:40.982 }' 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:40.982 BaseBdev2 00:17:40.982 BaseBdev3 
00:17:40.982 BaseBdev4' 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:40.982 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:41.241 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:41.241 "name": "NewBaseBdev", 00:17:41.241 "aliases": [ 00:17:41.241 "1ad83961-109f-4126-b43d-a777bd0269a7" 00:17:41.241 ], 00:17:41.241 "product_name": "Malloc disk", 00:17:41.241 "block_size": 512, 00:17:41.241 "num_blocks": 65536, 00:17:41.241 "uuid": "1ad83961-109f-4126-b43d-a777bd0269a7", 00:17:41.241 "assigned_rate_limits": { 00:17:41.241 "rw_ios_per_sec": 0, 00:17:41.241 "rw_mbytes_per_sec": 0, 00:17:41.241 "r_mbytes_per_sec": 0, 00:17:41.241 "w_mbytes_per_sec": 0 00:17:41.241 }, 00:17:41.241 "claimed": true, 00:17:41.241 "claim_type": "exclusive_write", 00:17:41.241 "zoned": false, 00:17:41.241 "supported_io_types": { 00:17:41.241 "read": true, 00:17:41.241 "write": true, 00:17:41.241 "unmap": true, 00:17:41.241 "flush": true, 00:17:41.241 "reset": true, 00:17:41.241 "nvme_admin": false, 00:17:41.241 "nvme_io": false, 00:17:41.241 "nvme_io_md": false, 00:17:41.241 "write_zeroes": true, 00:17:41.241 "zcopy": true, 00:17:41.241 "get_zone_info": false, 00:17:41.241 "zone_management": false, 00:17:41.241 "zone_append": false, 00:17:41.241 "compare": false, 00:17:41.241 "compare_and_write": false, 00:17:41.241 "abort": true, 00:17:41.241 "seek_hole": false, 00:17:41.241 "seek_data": false, 00:17:41.241 "copy": true, 00:17:41.242 "nvme_iov_md": false 00:17:41.242 }, 00:17:41.242 "memory_domains": [ 00:17:41.242 { 00:17:41.242 "dma_device_id": "system", 00:17:41.242 "dma_device_type": 1 00:17:41.242 }, 00:17:41.242 { 
00:17:41.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.242 "dma_device_type": 2 00:17:41.242 } 00:17:41.242 ], 00:17:41.242 "driver_specific": {} 00:17:41.242 }' 00:17:41.242 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.242 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.242 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:41.242 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.242 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.242 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:41.242 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.500 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.500 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:41.501 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.501 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.501 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:41.501 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:41.501 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:41.501 18:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:41.760 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:41.760 "name": "BaseBdev2", 00:17:41.760 "aliases": [ 00:17:41.760 
"46251d32-6205-47fd-acab-926ed57b309d" 00:17:41.760 ], 00:17:41.760 "product_name": "Malloc disk", 00:17:41.760 "block_size": 512, 00:17:41.760 "num_blocks": 65536, 00:17:41.760 "uuid": "46251d32-6205-47fd-acab-926ed57b309d", 00:17:41.760 "assigned_rate_limits": { 00:17:41.760 "rw_ios_per_sec": 0, 00:17:41.760 "rw_mbytes_per_sec": 0, 00:17:41.760 "r_mbytes_per_sec": 0, 00:17:41.760 "w_mbytes_per_sec": 0 00:17:41.760 }, 00:17:41.760 "claimed": true, 00:17:41.760 "claim_type": "exclusive_write", 00:17:41.760 "zoned": false, 00:17:41.760 "supported_io_types": { 00:17:41.760 "read": true, 00:17:41.760 "write": true, 00:17:41.760 "unmap": true, 00:17:41.760 "flush": true, 00:17:41.760 "reset": true, 00:17:41.760 "nvme_admin": false, 00:17:41.760 "nvme_io": false, 00:17:41.760 "nvme_io_md": false, 00:17:41.760 "write_zeroes": true, 00:17:41.760 "zcopy": true, 00:17:41.760 "get_zone_info": false, 00:17:41.760 "zone_management": false, 00:17:41.760 "zone_append": false, 00:17:41.760 "compare": false, 00:17:41.760 "compare_and_write": false, 00:17:41.760 "abort": true, 00:17:41.760 "seek_hole": false, 00:17:41.760 "seek_data": false, 00:17:41.760 "copy": true, 00:17:41.760 "nvme_iov_md": false 00:17:41.760 }, 00:17:41.760 "memory_domains": [ 00:17:41.760 { 00:17:41.760 "dma_device_id": "system", 00:17:41.760 "dma_device_type": 1 00:17:41.760 }, 00:17:41.760 { 00:17:41.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.760 "dma_device_type": 2 00:17:41.760 } 00:17:41.760 ], 00:17:41.760 "driver_specific": {} 00:17:41.760 }' 00:17:41.760 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.760 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.760 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:41.760 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.760 18:20:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.760 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:41.760 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.019 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.019 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:42.019 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.019 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.019 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:42.019 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.019 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:42.019 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.278 "name": "BaseBdev3", 00:17:42.278 "aliases": [ 00:17:42.278 "fb86e15e-c097-45cc-a339-c569d213d6d5" 00:17:42.278 ], 00:17:42.278 "product_name": "Malloc disk", 00:17:42.278 "block_size": 512, 00:17:42.278 "num_blocks": 65536, 00:17:42.278 "uuid": "fb86e15e-c097-45cc-a339-c569d213d6d5", 00:17:42.278 "assigned_rate_limits": { 00:17:42.278 "rw_ios_per_sec": 0, 00:17:42.278 "rw_mbytes_per_sec": 0, 00:17:42.278 "r_mbytes_per_sec": 0, 00:17:42.278 "w_mbytes_per_sec": 0 00:17:42.278 }, 00:17:42.278 "claimed": true, 00:17:42.278 "claim_type": "exclusive_write", 00:17:42.278 "zoned": false, 00:17:42.278 "supported_io_types": { 00:17:42.278 "read": true, 
00:17:42.278 "write": true, 00:17:42.278 "unmap": true, 00:17:42.278 "flush": true, 00:17:42.278 "reset": true, 00:17:42.278 "nvme_admin": false, 00:17:42.278 "nvme_io": false, 00:17:42.278 "nvme_io_md": false, 00:17:42.278 "write_zeroes": true, 00:17:42.278 "zcopy": true, 00:17:42.278 "get_zone_info": false, 00:17:42.278 "zone_management": false, 00:17:42.278 "zone_append": false, 00:17:42.278 "compare": false, 00:17:42.278 "compare_and_write": false, 00:17:42.278 "abort": true, 00:17:42.278 "seek_hole": false, 00:17:42.278 "seek_data": false, 00:17:42.278 "copy": true, 00:17:42.278 "nvme_iov_md": false 00:17:42.278 }, 00:17:42.278 "memory_domains": [ 00:17:42.278 { 00:17:42.278 "dma_device_id": "system", 00:17:42.278 "dma_device_type": 1 00:17:42.278 }, 00:17:42.278 { 00:17:42.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.278 "dma_device_type": 2 00:17:42.278 } 00:17:42.278 ], 00:17:42.278 "driver_specific": {} 00:17:42.278 }' 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:42.278 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.537 
18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.537 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:42.537 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.537 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:42.537 18:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.537 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.537 "name": "BaseBdev4", 00:17:42.537 "aliases": [ 00:17:42.537 "dfa4df81-9394-4be7-9a80-3badb89e3ab3" 00:17:42.537 ], 00:17:42.537 "product_name": "Malloc disk", 00:17:42.537 "block_size": 512, 00:17:42.537 "num_blocks": 65536, 00:17:42.537 "uuid": "dfa4df81-9394-4be7-9a80-3badb89e3ab3", 00:17:42.537 "assigned_rate_limits": { 00:17:42.537 "rw_ios_per_sec": 0, 00:17:42.537 "rw_mbytes_per_sec": 0, 00:17:42.537 "r_mbytes_per_sec": 0, 00:17:42.537 "w_mbytes_per_sec": 0 00:17:42.537 }, 00:17:42.537 "claimed": true, 00:17:42.537 "claim_type": "exclusive_write", 00:17:42.537 "zoned": false, 00:17:42.537 "supported_io_types": { 00:17:42.537 "read": true, 00:17:42.537 "write": true, 00:17:42.537 "unmap": true, 00:17:42.537 "flush": true, 00:17:42.537 "reset": true, 00:17:42.537 "nvme_admin": false, 00:17:42.537 "nvme_io": false, 00:17:42.537 "nvme_io_md": false, 00:17:42.537 "write_zeroes": true, 00:17:42.537 "zcopy": true, 00:17:42.537 "get_zone_info": false, 00:17:42.537 "zone_management": false, 00:17:42.537 "zone_append": false, 00:17:42.537 "compare": false, 00:17:42.537 "compare_and_write": false, 00:17:42.537 "abort": true, 00:17:42.537 "seek_hole": false, 00:17:42.537 "seek_data": false, 00:17:42.537 "copy": true, 00:17:42.537 "nvme_iov_md": false 
00:17:42.537 }, 00:17:42.537 "memory_domains": [ 00:17:42.537 { 00:17:42.537 "dma_device_id": "system", 00:17:42.537 "dma_device_type": 1 00:17:42.537 }, 00:17:42.537 { 00:17:42.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.537 "dma_device_type": 2 00:17:42.537 } 00:17:42.537 ], 00:17:42.537 "driver_specific": {} 00:17:42.537 }' 00:17:42.537 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:42.796 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:43.056 [2024-07-24 18:20:51.538306] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.056 [2024-07-24 18:20:51.538327] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:17:43.056 [2024-07-24 18:20:51.538366] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.056 [2024-07-24 18:20:51.538556] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.056 [2024-07-24 18:20:51.538563] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19a8450 name Existed_Raid, state offline 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 2241013 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 2241013 ']' 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 2241013 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2241013 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2241013' 00:17:43.056 killing process with pid 2241013 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 2241013 00:17:43.056 [2024-07-24 18:20:51.604668] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.056 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 2241013 00:17:43.056 [2024-07-24 18:20:51.636019] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.315 18:20:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:43.315 00:17:43.315 real 0m24.342s 00:17:43.315 user 0m44.620s 00:17:43.315 sys 0m4.532s 00:17:43.315 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:43.315 18:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.315 ************************************ 00:17:43.315 END TEST raid_state_function_test 00:17:43.315 ************************************ 00:17:43.316 18:20:51 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:43.316 18:20:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:43.316 18:20:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:43.316 18:20:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:43.316 ************************************ 00:17:43.316 START TEST raid_state_function_test_sb 00:17:43.316 ************************************ 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:43.316 18:20:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 
00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=2245853 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2245853' 00:17:43.316 Process raid pid: 2245853 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 2245853 /var/tmp/spdk-raid.sock 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2245853 ']' 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:43.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.316 18:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:43.575 [2024-07-24 18:20:51.943532] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:17:43.575 [2024-07-24 18:20:51.943577] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:01.0 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:01.1 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:01.2 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:01.3 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:01.4 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:01.5 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:01.6 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:01.7 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:02.0 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:02.1 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:02.2 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:02.3 cannot be used 00:17:43.576 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:02.4 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:02.5 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:02.6 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b3:02.7 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:01.0 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:01.1 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:01.2 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:01.3 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:01.4 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:01.5 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:01.6 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:01.7 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:02.0 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:02.1 cannot be used 00:17:43.576 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:02.2 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:02.3 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:02.4 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:02.5 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:02.6 cannot be used 00:17:43.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:43.576 EAL: Requested device 0000:b5:02.7 cannot be used 00:17:43.576 [2024-07-24 18:20:52.038892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.576 [2024-07-24 18:20:52.114942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.835 [2024-07-24 18:20:52.172551] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.835 [2024-07-24 18:20:52.172578] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:44.404 [2024-07-24 18:20:52.879711] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.404 [2024-07-24 18:20:52.879740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:44.404 [2024-07-24 18:20:52.879747] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.404 [2024-07-24 18:20:52.879754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.404 [2024-07-24 18:20:52.879759] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.404 [2024-07-24 18:20:52.879766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.404 [2024-07-24 18:20:52.879771] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:44.404 [2024-07-24 18:20:52.879778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.404 18:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.664 18:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.664 "name": "Existed_Raid", 00:17:44.664 "uuid": "8332463d-92c9-4a5d-910b-871561441836", 00:17:44.664 "strip_size_kb": 0, 00:17:44.664 "state": "configuring", 00:17:44.664 "raid_level": "raid1", 00:17:44.664 "superblock": true, 00:17:44.664 "num_base_bdevs": 4, 00:17:44.664 "num_base_bdevs_discovered": 0, 00:17:44.664 "num_base_bdevs_operational": 4, 00:17:44.664 "base_bdevs_list": [ 00:17:44.664 { 00:17:44.664 "name": "BaseBdev1", 00:17:44.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.664 "is_configured": false, 00:17:44.664 "data_offset": 0, 00:17:44.664 "data_size": 0 00:17:44.664 }, 00:17:44.664 { 00:17:44.664 "name": "BaseBdev2", 00:17:44.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.664 "is_configured": false, 00:17:44.664 "data_offset": 0, 00:17:44.664 "data_size": 0 00:17:44.664 }, 00:17:44.664 { 00:17:44.664 "name": "BaseBdev3", 00:17:44.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.664 "is_configured": false, 00:17:44.664 "data_offset": 0, 00:17:44.664 "data_size": 0 00:17:44.664 }, 00:17:44.664 { 00:17:44.664 "name": "BaseBdev4", 00:17:44.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.664 "is_configured": false, 00:17:44.664 "data_offset": 0, 00:17:44.664 "data_size": 0 00:17:44.664 } 00:17:44.664 ] 00:17:44.664 }' 00:17:44.664 18:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.664 18:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.231 18:20:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:45.231 [2024-07-24 18:20:53.689697] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.231 [2024-07-24 18:20:53.689717] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdb81e0 name Existed_Raid, state configuring 00:17:45.231 18:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:45.489 [2024-07-24 18:20:53.846121] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.489 [2024-07-24 18:20:53.846141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.489 [2024-07-24 18:20:53.846147] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.489 [2024-07-24 18:20:53.846155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.489 [2024-07-24 18:20:53.846160] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:45.489 [2024-07-24 18:20:53.846168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:45.489 [2024-07-24 18:20:53.846174] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:45.489 [2024-07-24 18:20:53.846181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:45.489 18:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 
00:17:45.489 [2024-07-24 18:20:54.022896] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.489 BaseBdev1 00:17:45.489 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:45.489 18:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:45.489 18:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:45.489 18:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:45.489 18:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:45.489 18:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:45.489 18:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.748 18:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:46.006 [ 00:17:46.006 { 00:17:46.006 "name": "BaseBdev1", 00:17:46.006 "aliases": [ 00:17:46.006 "0cb233a0-ef30-4501-8ec9-43eb9d3b0ee6" 00:17:46.006 ], 00:17:46.006 "product_name": "Malloc disk", 00:17:46.006 "block_size": 512, 00:17:46.006 "num_blocks": 65536, 00:17:46.006 "uuid": "0cb233a0-ef30-4501-8ec9-43eb9d3b0ee6", 00:17:46.006 "assigned_rate_limits": { 00:17:46.006 "rw_ios_per_sec": 0, 00:17:46.006 "rw_mbytes_per_sec": 0, 00:17:46.006 "r_mbytes_per_sec": 0, 00:17:46.006 "w_mbytes_per_sec": 0 00:17:46.006 }, 00:17:46.006 "claimed": true, 00:17:46.006 "claim_type": "exclusive_write", 00:17:46.006 "zoned": false, 00:17:46.006 "supported_io_types": { 00:17:46.006 "read": true, 00:17:46.006 "write": true, 
00:17:46.006 "unmap": true, 00:17:46.006 "flush": true, 00:17:46.006 "reset": true, 00:17:46.006 "nvme_admin": false, 00:17:46.006 "nvme_io": false, 00:17:46.006 "nvme_io_md": false, 00:17:46.006 "write_zeroes": true, 00:17:46.006 "zcopy": true, 00:17:46.006 "get_zone_info": false, 00:17:46.006 "zone_management": false, 00:17:46.006 "zone_append": false, 00:17:46.006 "compare": false, 00:17:46.006 "compare_and_write": false, 00:17:46.006 "abort": true, 00:17:46.006 "seek_hole": false, 00:17:46.006 "seek_data": false, 00:17:46.006 "copy": true, 00:17:46.006 "nvme_iov_md": false 00:17:46.006 }, 00:17:46.006 "memory_domains": [ 00:17:46.006 { 00:17:46.006 "dma_device_id": "system", 00:17:46.006 "dma_device_type": 1 00:17:46.006 }, 00:17:46.006 { 00:17:46.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.006 "dma_device_type": 2 00:17:46.006 } 00:17:46.006 ], 00:17:46.006 "driver_specific": {} 00:17:46.006 } 00:17:46.006 ] 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.006 "name": "Existed_Raid", 00:17:46.006 "uuid": "39b19cc1-1e43-4281-9691-87a4ef16681e", 00:17:46.006 "strip_size_kb": 0, 00:17:46.006 "state": "configuring", 00:17:46.006 "raid_level": "raid1", 00:17:46.006 "superblock": true, 00:17:46.006 "num_base_bdevs": 4, 00:17:46.006 "num_base_bdevs_discovered": 1, 00:17:46.006 "num_base_bdevs_operational": 4, 00:17:46.006 "base_bdevs_list": [ 00:17:46.006 { 00:17:46.006 "name": "BaseBdev1", 00:17:46.006 "uuid": "0cb233a0-ef30-4501-8ec9-43eb9d3b0ee6", 00:17:46.006 "is_configured": true, 00:17:46.006 "data_offset": 2048, 00:17:46.006 "data_size": 63488 00:17:46.006 }, 00:17:46.006 { 00:17:46.006 "name": "BaseBdev2", 00:17:46.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.006 "is_configured": false, 00:17:46.006 "data_offset": 0, 00:17:46.006 "data_size": 0 00:17:46.006 }, 00:17:46.006 { 00:17:46.006 "name": "BaseBdev3", 00:17:46.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.006 "is_configured": false, 00:17:46.006 "data_offset": 0, 00:17:46.006 "data_size": 0 00:17:46.006 }, 00:17:46.006 { 00:17:46.006 "name": "BaseBdev4", 00:17:46.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.006 "is_configured": false, 00:17:46.006 "data_offset": 0, 00:17:46.006 "data_size": 0 00:17:46.006 } 00:17:46.006 ] 
00:17:46.006 }' 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.006 18:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.572 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:46.572 [2024-07-24 18:20:55.165851] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.572 [2024-07-24 18:20:55.165881] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdb7a50 name Existed_Raid, state configuring 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:46.831 [2024-07-24 18:20:55.342333] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.831 [2024-07-24 18:20:55.343417] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.831 [2024-07-24 18:20:55.343442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.831 [2024-07-24 18:20:55.343448] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:46.831 [2024-07-24 18:20:55.343455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.831 [2024-07-24 18:20:55.343461] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:46.831 [2024-07-24 18:20:55.343467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:46.831 
18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.831 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.091 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.091 "name": "Existed_Raid", 00:17:47.091 "uuid": "a28d5867-f29b-4b90-a4e0-f6c7db6e3ab6", 00:17:47.091 "strip_size_kb": 0, 00:17:47.091 "state": "configuring", 00:17:47.091 "raid_level": "raid1", 00:17:47.091 "superblock": true, 00:17:47.091 
"num_base_bdevs": 4, 00:17:47.091 "num_base_bdevs_discovered": 1, 00:17:47.091 "num_base_bdevs_operational": 4, 00:17:47.091 "base_bdevs_list": [ 00:17:47.091 { 00:17:47.091 "name": "BaseBdev1", 00:17:47.091 "uuid": "0cb233a0-ef30-4501-8ec9-43eb9d3b0ee6", 00:17:47.091 "is_configured": true, 00:17:47.091 "data_offset": 2048, 00:17:47.091 "data_size": 63488 00:17:47.091 }, 00:17:47.091 { 00:17:47.091 "name": "BaseBdev2", 00:17:47.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.091 "is_configured": false, 00:17:47.091 "data_offset": 0, 00:17:47.091 "data_size": 0 00:17:47.091 }, 00:17:47.091 { 00:17:47.091 "name": "BaseBdev3", 00:17:47.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.091 "is_configured": false, 00:17:47.091 "data_offset": 0, 00:17:47.091 "data_size": 0 00:17:47.091 }, 00:17:47.091 { 00:17:47.091 "name": "BaseBdev4", 00:17:47.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.091 "is_configured": false, 00:17:47.091 "data_offset": 0, 00:17:47.091 "data_size": 0 00:17:47.091 } 00:17:47.091 ] 00:17:47.091 }' 00:17:47.091 18:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.091 18:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.657 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:47.657 [2024-07-24 18:20:56.211296] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.657 BaseBdev2 00:17:47.657 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:47.657 18:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:47.657 18:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:17:47.657 18:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:47.657 18:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:47.657 18:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:47.657 18:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.916 18:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:48.175 [ 00:17:48.175 { 00:17:48.175 "name": "BaseBdev2", 00:17:48.175 "aliases": [ 00:17:48.175 "0244f551-e491-4138-93fe-008e5c4cd8ce" 00:17:48.175 ], 00:17:48.175 "product_name": "Malloc disk", 00:17:48.175 "block_size": 512, 00:17:48.175 "num_blocks": 65536, 00:17:48.175 "uuid": "0244f551-e491-4138-93fe-008e5c4cd8ce", 00:17:48.175 "assigned_rate_limits": { 00:17:48.175 "rw_ios_per_sec": 0, 00:17:48.175 "rw_mbytes_per_sec": 0, 00:17:48.175 "r_mbytes_per_sec": 0, 00:17:48.175 "w_mbytes_per_sec": 0 00:17:48.175 }, 00:17:48.175 "claimed": true, 00:17:48.175 "claim_type": "exclusive_write", 00:17:48.175 "zoned": false, 00:17:48.175 "supported_io_types": { 00:17:48.175 "read": true, 00:17:48.175 "write": true, 00:17:48.175 "unmap": true, 00:17:48.175 "flush": true, 00:17:48.175 "reset": true, 00:17:48.175 "nvme_admin": false, 00:17:48.175 "nvme_io": false, 00:17:48.175 "nvme_io_md": false, 00:17:48.175 "write_zeroes": true, 00:17:48.175 "zcopy": true, 00:17:48.175 "get_zone_info": false, 00:17:48.175 "zone_management": false, 00:17:48.175 "zone_append": false, 00:17:48.175 "compare": false, 00:17:48.175 "compare_and_write": false, 00:17:48.175 "abort": true, 00:17:48.175 "seek_hole": false, 00:17:48.175 
"seek_data": false, 00:17:48.175 "copy": true, 00:17:48.175 "nvme_iov_md": false 00:17:48.175 }, 00:17:48.176 "memory_domains": [ 00:17:48.176 { 00:17:48.176 "dma_device_id": "system", 00:17:48.176 "dma_device_type": 1 00:17:48.176 }, 00:17:48.176 { 00:17:48.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.176 "dma_device_type": 2 00:17:48.176 } 00:17:48.176 ], 00:17:48.176 "driver_specific": {} 00:17:48.176 } 00:17:48.176 ] 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.176 18:20:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.176 "name": "Existed_Raid", 00:17:48.176 "uuid": "a28d5867-f29b-4b90-a4e0-f6c7db6e3ab6", 00:17:48.176 "strip_size_kb": 0, 00:17:48.176 "state": "configuring", 00:17:48.176 "raid_level": "raid1", 00:17:48.176 "superblock": true, 00:17:48.176 "num_base_bdevs": 4, 00:17:48.176 "num_base_bdevs_discovered": 2, 00:17:48.176 "num_base_bdevs_operational": 4, 00:17:48.176 "base_bdevs_list": [ 00:17:48.176 { 00:17:48.176 "name": "BaseBdev1", 00:17:48.176 "uuid": "0cb233a0-ef30-4501-8ec9-43eb9d3b0ee6", 00:17:48.176 "is_configured": true, 00:17:48.176 "data_offset": 2048, 00:17:48.176 "data_size": 63488 00:17:48.176 }, 00:17:48.176 { 00:17:48.176 "name": "BaseBdev2", 00:17:48.176 "uuid": "0244f551-e491-4138-93fe-008e5c4cd8ce", 00:17:48.176 "is_configured": true, 00:17:48.176 "data_offset": 2048, 00:17:48.176 "data_size": 63488 00:17:48.176 }, 00:17:48.176 { 00:17:48.176 "name": "BaseBdev3", 00:17:48.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.176 "is_configured": false, 00:17:48.176 "data_offset": 0, 00:17:48.176 "data_size": 0 00:17:48.176 }, 00:17:48.176 { 00:17:48.176 "name": "BaseBdev4", 00:17:48.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.176 "is_configured": false, 00:17:48.176 "data_offset": 0, 00:17:48.176 "data_size": 0 00:17:48.176 } 00:17:48.176 ] 00:17:48.176 }' 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.176 18:20:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.743 18:20:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:49.003 [2024-07-24 18:20:57.405214] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:49.003 BaseBdev3 00:17:49.003 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:49.003 18:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:49.003 18:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:49.003 18:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:49.003 18:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:49.003 18:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:49.003 18:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:49.003 18:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:49.262 [ 00:17:49.262 { 00:17:49.263 "name": "BaseBdev3", 00:17:49.263 "aliases": [ 00:17:49.263 "bbdc2c48-847e-4942-b122-259f4b0cba1c" 00:17:49.263 ], 00:17:49.263 "product_name": "Malloc disk", 00:17:49.263 "block_size": 512, 00:17:49.263 "num_blocks": 65536, 00:17:49.263 "uuid": "bbdc2c48-847e-4942-b122-259f4b0cba1c", 00:17:49.263 "assigned_rate_limits": { 00:17:49.263 "rw_ios_per_sec": 0, 00:17:49.263 "rw_mbytes_per_sec": 0, 00:17:49.263 "r_mbytes_per_sec": 0, 00:17:49.263 "w_mbytes_per_sec": 0 00:17:49.263 }, 
00:17:49.263 "claimed": true, 00:17:49.263 "claim_type": "exclusive_write", 00:17:49.263 "zoned": false, 00:17:49.263 "supported_io_types": { 00:17:49.263 "read": true, 00:17:49.263 "write": true, 00:17:49.263 "unmap": true, 00:17:49.263 "flush": true, 00:17:49.263 "reset": true, 00:17:49.263 "nvme_admin": false, 00:17:49.263 "nvme_io": false, 00:17:49.263 "nvme_io_md": false, 00:17:49.263 "write_zeroes": true, 00:17:49.263 "zcopy": true, 00:17:49.263 "get_zone_info": false, 00:17:49.263 "zone_management": false, 00:17:49.263 "zone_append": false, 00:17:49.263 "compare": false, 00:17:49.263 "compare_and_write": false, 00:17:49.263 "abort": true, 00:17:49.263 "seek_hole": false, 00:17:49.263 "seek_data": false, 00:17:49.263 "copy": true, 00:17:49.263 "nvme_iov_md": false 00:17:49.263 }, 00:17:49.263 "memory_domains": [ 00:17:49.263 { 00:17:49.263 "dma_device_id": "system", 00:17:49.263 "dma_device_type": 1 00:17:49.263 }, 00:17:49.263 { 00:17:49.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.263 "dma_device_type": 2 00:17:49.263 } 00:17:49.263 ], 00:17:49.263 "driver_specific": {} 00:17:49.263 } 00:17:49.263 ] 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:49.263 18:20:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.263 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.522 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.522 "name": "Existed_Raid", 00:17:49.522 "uuid": "a28d5867-f29b-4b90-a4e0-f6c7db6e3ab6", 00:17:49.522 "strip_size_kb": 0, 00:17:49.522 "state": "configuring", 00:17:49.522 "raid_level": "raid1", 00:17:49.522 "superblock": true, 00:17:49.522 "num_base_bdevs": 4, 00:17:49.522 "num_base_bdevs_discovered": 3, 00:17:49.522 "num_base_bdevs_operational": 4, 00:17:49.522 "base_bdevs_list": [ 00:17:49.523 { 00:17:49.523 "name": "BaseBdev1", 00:17:49.523 "uuid": "0cb233a0-ef30-4501-8ec9-43eb9d3b0ee6", 00:17:49.523 "is_configured": true, 00:17:49.523 "data_offset": 2048, 00:17:49.523 "data_size": 63488 00:17:49.523 }, 00:17:49.523 { 00:17:49.523 "name": "BaseBdev2", 00:17:49.523 "uuid": "0244f551-e491-4138-93fe-008e5c4cd8ce", 00:17:49.523 "is_configured": true, 00:17:49.523 "data_offset": 2048, 00:17:49.523 "data_size": 63488 00:17:49.523 }, 00:17:49.523 { 00:17:49.523 "name": 
"BaseBdev3", 00:17:49.523 "uuid": "bbdc2c48-847e-4942-b122-259f4b0cba1c", 00:17:49.523 "is_configured": true, 00:17:49.523 "data_offset": 2048, 00:17:49.523 "data_size": 63488 00:17:49.523 }, 00:17:49.523 { 00:17:49.523 "name": "BaseBdev4", 00:17:49.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.523 "is_configured": false, 00:17:49.523 "data_offset": 0, 00:17:49.523 "data_size": 0 00:17:49.523 } 00:17:49.523 ] 00:17:49.523 }' 00:17:49.523 18:20:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.523 18:20:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.092 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:50.092 [2024-07-24 18:20:58.567015] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:50.092 [2024-07-24 18:20:58.567150] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xdb8ab0 00:17:50.092 [2024-07-24 18:20:58.567159] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:50.092 [2024-07-24 18:20:58.567278] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf6bcd0 00:17:50.092 [2024-07-24 18:20:58.567361] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdb8ab0 00:17:50.092 [2024-07-24 18:20:58.567367] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xdb8ab0 00:17:50.092 [2024-07-24 18:20:58.567430] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.092 BaseBdev4 00:17:50.092 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:50.092 18:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev4 00:17:50.092 18:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:50.092 18:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:50.092 18:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:50.092 18:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:50.092 18:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.351 18:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:50.351 [ 00:17:50.351 { 00:17:50.351 "name": "BaseBdev4", 00:17:50.351 "aliases": [ 00:17:50.351 "33567ebb-7165-457b-8774-4ce9e4be1212" 00:17:50.351 ], 00:17:50.351 "product_name": "Malloc disk", 00:17:50.351 "block_size": 512, 00:17:50.351 "num_blocks": 65536, 00:17:50.351 "uuid": "33567ebb-7165-457b-8774-4ce9e4be1212", 00:17:50.351 "assigned_rate_limits": { 00:17:50.351 "rw_ios_per_sec": 0, 00:17:50.352 "rw_mbytes_per_sec": 0, 00:17:50.352 "r_mbytes_per_sec": 0, 00:17:50.352 "w_mbytes_per_sec": 0 00:17:50.352 }, 00:17:50.352 "claimed": true, 00:17:50.352 "claim_type": "exclusive_write", 00:17:50.352 "zoned": false, 00:17:50.352 "supported_io_types": { 00:17:50.352 "read": true, 00:17:50.352 "write": true, 00:17:50.352 "unmap": true, 00:17:50.352 "flush": true, 00:17:50.352 "reset": true, 00:17:50.352 "nvme_admin": false, 00:17:50.352 "nvme_io": false, 00:17:50.352 "nvme_io_md": false, 00:17:50.352 "write_zeroes": true, 00:17:50.352 "zcopy": true, 00:17:50.352 "get_zone_info": false, 00:17:50.352 "zone_management": false, 00:17:50.352 "zone_append": false, 00:17:50.352 
"compare": false, 00:17:50.352 "compare_and_write": false, 00:17:50.352 "abort": true, 00:17:50.352 "seek_hole": false, 00:17:50.352 "seek_data": false, 00:17:50.352 "copy": true, 00:17:50.352 "nvme_iov_md": false 00:17:50.352 }, 00:17:50.352 "memory_domains": [ 00:17:50.352 { 00:17:50.352 "dma_device_id": "system", 00:17:50.352 "dma_device_type": 1 00:17:50.352 }, 00:17:50.352 { 00:17:50.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.352 "dma_device_type": 2 00:17:50.352 } 00:17:50.352 ], 00:17:50.352 "driver_specific": {} 00:17:50.352 } 00:17:50.352 ] 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.352 18:20:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.352 18:20:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.612 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.612 "name": "Existed_Raid", 00:17:50.612 "uuid": "a28d5867-f29b-4b90-a4e0-f6c7db6e3ab6", 00:17:50.612 "strip_size_kb": 0, 00:17:50.612 "state": "online", 00:17:50.612 "raid_level": "raid1", 00:17:50.612 "superblock": true, 00:17:50.612 "num_base_bdevs": 4, 00:17:50.612 "num_base_bdevs_discovered": 4, 00:17:50.612 "num_base_bdevs_operational": 4, 00:17:50.612 "base_bdevs_list": [ 00:17:50.612 { 00:17:50.612 "name": "BaseBdev1", 00:17:50.612 "uuid": "0cb233a0-ef30-4501-8ec9-43eb9d3b0ee6", 00:17:50.612 "is_configured": true, 00:17:50.612 "data_offset": 2048, 00:17:50.612 "data_size": 63488 00:17:50.612 }, 00:17:50.612 { 00:17:50.612 "name": "BaseBdev2", 00:17:50.612 "uuid": "0244f551-e491-4138-93fe-008e5c4cd8ce", 00:17:50.612 "is_configured": true, 00:17:50.612 "data_offset": 2048, 00:17:50.612 "data_size": 63488 00:17:50.612 }, 00:17:50.612 { 00:17:50.612 "name": "BaseBdev3", 00:17:50.612 "uuid": "bbdc2c48-847e-4942-b122-259f4b0cba1c", 00:17:50.612 "is_configured": true, 00:17:50.612 "data_offset": 2048, 00:17:50.612 "data_size": 63488 00:17:50.612 }, 00:17:50.612 { 00:17:50.612 "name": "BaseBdev4", 00:17:50.612 "uuid": "33567ebb-7165-457b-8774-4ce9e4be1212", 00:17:50.612 "is_configured": true, 00:17:50.612 "data_offset": 2048, 00:17:50.612 "data_size": 63488 00:17:50.612 } 00:17:50.612 ] 00:17:50.612 }' 00:17:50.612 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.612 18:20:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.180 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:51.180 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:51.180 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:51.180 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:51.180 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:51.180 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:51.180 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:51.180 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:51.180 [2024-07-24 18:20:59.706141] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.180 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:51.180 "name": "Existed_Raid", 00:17:51.180 "aliases": [ 00:17:51.180 "a28d5867-f29b-4b90-a4e0-f6c7db6e3ab6" 00:17:51.180 ], 00:17:51.180 "product_name": "Raid Volume", 00:17:51.180 "block_size": 512, 00:17:51.180 "num_blocks": 63488, 00:17:51.180 "uuid": "a28d5867-f29b-4b90-a4e0-f6c7db6e3ab6", 00:17:51.180 "assigned_rate_limits": { 00:17:51.180 "rw_ios_per_sec": 0, 00:17:51.180 "rw_mbytes_per_sec": 0, 00:17:51.180 "r_mbytes_per_sec": 0, 00:17:51.180 "w_mbytes_per_sec": 0 00:17:51.180 }, 00:17:51.180 "claimed": false, 00:17:51.180 "zoned": false, 00:17:51.180 "supported_io_types": { 00:17:51.180 "read": true, 00:17:51.180 "write": true, 00:17:51.180 "unmap": false, 
00:17:51.180 "flush": false, 00:17:51.180 "reset": true, 00:17:51.180 "nvme_admin": false, 00:17:51.180 "nvme_io": false, 00:17:51.180 "nvme_io_md": false, 00:17:51.180 "write_zeroes": true, 00:17:51.180 "zcopy": false, 00:17:51.180 "get_zone_info": false, 00:17:51.180 "zone_management": false, 00:17:51.180 "zone_append": false, 00:17:51.180 "compare": false, 00:17:51.180 "compare_and_write": false, 00:17:51.180 "abort": false, 00:17:51.181 "seek_hole": false, 00:17:51.181 "seek_data": false, 00:17:51.181 "copy": false, 00:17:51.181 "nvme_iov_md": false 00:17:51.181 }, 00:17:51.181 "memory_domains": [ 00:17:51.181 { 00:17:51.181 "dma_device_id": "system", 00:17:51.181 "dma_device_type": 1 00:17:51.181 }, 00:17:51.181 { 00:17:51.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.181 "dma_device_type": 2 00:17:51.181 }, 00:17:51.181 { 00:17:51.181 "dma_device_id": "system", 00:17:51.181 "dma_device_type": 1 00:17:51.181 }, 00:17:51.181 { 00:17:51.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.181 "dma_device_type": 2 00:17:51.181 }, 00:17:51.181 { 00:17:51.181 "dma_device_id": "system", 00:17:51.181 "dma_device_type": 1 00:17:51.181 }, 00:17:51.181 { 00:17:51.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.181 "dma_device_type": 2 00:17:51.181 }, 00:17:51.181 { 00:17:51.181 "dma_device_id": "system", 00:17:51.181 "dma_device_type": 1 00:17:51.181 }, 00:17:51.181 { 00:17:51.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.181 "dma_device_type": 2 00:17:51.181 } 00:17:51.181 ], 00:17:51.181 "driver_specific": { 00:17:51.181 "raid": { 00:17:51.181 "uuid": "a28d5867-f29b-4b90-a4e0-f6c7db6e3ab6", 00:17:51.181 "strip_size_kb": 0, 00:17:51.181 "state": "online", 00:17:51.181 "raid_level": "raid1", 00:17:51.181 "superblock": true, 00:17:51.181 "num_base_bdevs": 4, 00:17:51.181 "num_base_bdevs_discovered": 4, 00:17:51.181 "num_base_bdevs_operational": 4, 00:17:51.181 "base_bdevs_list": [ 00:17:51.181 { 00:17:51.181 "name": "BaseBdev1", 00:17:51.181 
"uuid": "0cb233a0-ef30-4501-8ec9-43eb9d3b0ee6", 00:17:51.181 "is_configured": true, 00:17:51.181 "data_offset": 2048, 00:17:51.181 "data_size": 63488 00:17:51.181 }, 00:17:51.181 { 00:17:51.181 "name": "BaseBdev2", 00:17:51.181 "uuid": "0244f551-e491-4138-93fe-008e5c4cd8ce", 00:17:51.181 "is_configured": true, 00:17:51.181 "data_offset": 2048, 00:17:51.181 "data_size": 63488 00:17:51.181 }, 00:17:51.181 { 00:17:51.181 "name": "BaseBdev3", 00:17:51.181 "uuid": "bbdc2c48-847e-4942-b122-259f4b0cba1c", 00:17:51.181 "is_configured": true, 00:17:51.181 "data_offset": 2048, 00:17:51.181 "data_size": 63488 00:17:51.181 }, 00:17:51.181 { 00:17:51.181 "name": "BaseBdev4", 00:17:51.181 "uuid": "33567ebb-7165-457b-8774-4ce9e4be1212", 00:17:51.181 "is_configured": true, 00:17:51.181 "data_offset": 2048, 00:17:51.181 "data_size": 63488 00:17:51.181 } 00:17:51.181 ] 00:17:51.181 } 00:17:51.181 } 00:17:51.181 }' 00:17:51.181 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:51.181 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:51.181 BaseBdev2 00:17:51.181 BaseBdev3 00:17:51.181 BaseBdev4' 00:17:51.181 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:51.181 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:51.181 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:51.440 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:51.440 "name": "BaseBdev1", 00:17:51.440 "aliases": [ 00:17:51.440 "0cb233a0-ef30-4501-8ec9-43eb9d3b0ee6" 00:17:51.440 ], 00:17:51.440 "product_name": "Malloc disk", 00:17:51.440 
"block_size": 512, 00:17:51.440 "num_blocks": 65536, 00:17:51.440 "uuid": "0cb233a0-ef30-4501-8ec9-43eb9d3b0ee6", 00:17:51.440 "assigned_rate_limits": { 00:17:51.440 "rw_ios_per_sec": 0, 00:17:51.440 "rw_mbytes_per_sec": 0, 00:17:51.440 "r_mbytes_per_sec": 0, 00:17:51.440 "w_mbytes_per_sec": 0 00:17:51.440 }, 00:17:51.440 "claimed": true, 00:17:51.440 "claim_type": "exclusive_write", 00:17:51.440 "zoned": false, 00:17:51.440 "supported_io_types": { 00:17:51.440 "read": true, 00:17:51.440 "write": true, 00:17:51.440 "unmap": true, 00:17:51.440 "flush": true, 00:17:51.440 "reset": true, 00:17:51.440 "nvme_admin": false, 00:17:51.440 "nvme_io": false, 00:17:51.440 "nvme_io_md": false, 00:17:51.440 "write_zeroes": true, 00:17:51.440 "zcopy": true, 00:17:51.440 "get_zone_info": false, 00:17:51.440 "zone_management": false, 00:17:51.440 "zone_append": false, 00:17:51.440 "compare": false, 00:17:51.440 "compare_and_write": false, 00:17:51.440 "abort": true, 00:17:51.440 "seek_hole": false, 00:17:51.440 "seek_data": false, 00:17:51.440 "copy": true, 00:17:51.440 "nvme_iov_md": false 00:17:51.440 }, 00:17:51.440 "memory_domains": [ 00:17:51.440 { 00:17:51.440 "dma_device_id": "system", 00:17:51.440 "dma_device_type": 1 00:17:51.440 }, 00:17:51.440 { 00:17:51.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.440 "dma_device_type": 2 00:17:51.440 } 00:17:51.440 ], 00:17:51.440 "driver_specific": {} 00:17:51.440 }' 00:17:51.440 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:51.440 18:20:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:51.440 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:51.440 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:51.699 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:51.699 18:21:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:51.699 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:51.699 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:51.699 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:51.699 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:51.699 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:51.699 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:51.699 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:51.699 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:51.699 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:51.958 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:51.958 "name": "BaseBdev2", 00:17:51.958 "aliases": [ 00:17:51.958 "0244f551-e491-4138-93fe-008e5c4cd8ce" 00:17:51.958 ], 00:17:51.958 "product_name": "Malloc disk", 00:17:51.958 "block_size": 512, 00:17:51.958 "num_blocks": 65536, 00:17:51.958 "uuid": "0244f551-e491-4138-93fe-008e5c4cd8ce", 00:17:51.958 "assigned_rate_limits": { 00:17:51.958 "rw_ios_per_sec": 0, 00:17:51.958 "rw_mbytes_per_sec": 0, 00:17:51.958 "r_mbytes_per_sec": 0, 00:17:51.958 "w_mbytes_per_sec": 0 00:17:51.958 }, 00:17:51.958 "claimed": true, 00:17:51.958 "claim_type": "exclusive_write", 00:17:51.958 "zoned": false, 00:17:51.958 "supported_io_types": { 00:17:51.958 "read": true, 00:17:51.958 "write": true, 00:17:51.958 "unmap": true, 00:17:51.958 
"flush": true, 00:17:51.958 "reset": true, 00:17:51.958 "nvme_admin": false, 00:17:51.958 "nvme_io": false, 00:17:51.958 "nvme_io_md": false, 00:17:51.958 "write_zeroes": true, 00:17:51.958 "zcopy": true, 00:17:51.958 "get_zone_info": false, 00:17:51.958 "zone_management": false, 00:17:51.958 "zone_append": false, 00:17:51.959 "compare": false, 00:17:51.959 "compare_and_write": false, 00:17:51.959 "abort": true, 00:17:51.959 "seek_hole": false, 00:17:51.959 "seek_data": false, 00:17:51.959 "copy": true, 00:17:51.959 "nvme_iov_md": false 00:17:51.959 }, 00:17:51.959 "memory_domains": [ 00:17:51.959 { 00:17:51.959 "dma_device_id": "system", 00:17:51.959 "dma_device_type": 1 00:17:51.959 }, 00:17:51.959 { 00:17:51.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.959 "dma_device_type": 2 00:17:51.959 } 00:17:51.959 ], 00:17:51.959 "driver_specific": {} 00:17:51.959 }' 00:17:51.959 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:51.959 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:51.959 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:51.959 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:51.959 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:52.218 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:52.218 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:52.218 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:52.218 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:52.218 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:52.218 18:21:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:52.218 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:52.218 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:52.218 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:52.218 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:52.517 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:52.517 "name": "BaseBdev3", 00:17:52.517 "aliases": [ 00:17:52.517 "bbdc2c48-847e-4942-b122-259f4b0cba1c" 00:17:52.517 ], 00:17:52.517 "product_name": "Malloc disk", 00:17:52.517 "block_size": 512, 00:17:52.517 "num_blocks": 65536, 00:17:52.517 "uuid": "bbdc2c48-847e-4942-b122-259f4b0cba1c", 00:17:52.517 "assigned_rate_limits": { 00:17:52.517 "rw_ios_per_sec": 0, 00:17:52.517 "rw_mbytes_per_sec": 0, 00:17:52.517 "r_mbytes_per_sec": 0, 00:17:52.517 "w_mbytes_per_sec": 0 00:17:52.517 }, 00:17:52.517 "claimed": true, 00:17:52.517 "claim_type": "exclusive_write", 00:17:52.517 "zoned": false, 00:17:52.517 "supported_io_types": { 00:17:52.517 "read": true, 00:17:52.517 "write": true, 00:17:52.517 "unmap": true, 00:17:52.517 "flush": true, 00:17:52.517 "reset": true, 00:17:52.517 "nvme_admin": false, 00:17:52.517 "nvme_io": false, 00:17:52.517 "nvme_io_md": false, 00:17:52.517 "write_zeroes": true, 00:17:52.517 "zcopy": true, 00:17:52.517 "get_zone_info": false, 00:17:52.517 "zone_management": false, 00:17:52.517 "zone_append": false, 00:17:52.517 "compare": false, 00:17:52.517 "compare_and_write": false, 00:17:52.517 "abort": true, 00:17:52.517 "seek_hole": false, 00:17:52.517 "seek_data": false, 00:17:52.517 "copy": true, 00:17:52.517 "nvme_iov_md": 
false 00:17:52.517 }, 00:17:52.517 "memory_domains": [ 00:17:52.517 { 00:17:52.517 "dma_device_id": "system", 00:17:52.517 "dma_device_type": 1 00:17:52.517 }, 00:17:52.517 { 00:17:52.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.517 "dma_device_type": 2 00:17:52.517 } 00:17:52.517 ], 00:17:52.517 "driver_specific": {} 00:17:52.517 }' 00:17:52.517 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:52.517 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:52.517 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:52.517 18:21:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:52.517 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:52.517 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:52.517 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:52.793 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:52.793 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:52.793 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:52.793 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:52.793 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:52.793 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:52.793 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:52.793 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:52.793 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:52.793 "name": "BaseBdev4", 00:17:52.793 "aliases": [ 00:17:52.793 "33567ebb-7165-457b-8774-4ce9e4be1212" 00:17:52.793 ], 00:17:52.793 "product_name": "Malloc disk", 00:17:52.793 "block_size": 512, 00:17:52.793 "num_blocks": 65536, 00:17:52.793 "uuid": "33567ebb-7165-457b-8774-4ce9e4be1212", 00:17:52.793 "assigned_rate_limits": { 00:17:52.793 "rw_ios_per_sec": 0, 00:17:52.793 "rw_mbytes_per_sec": 0, 00:17:52.793 "r_mbytes_per_sec": 0, 00:17:52.793 "w_mbytes_per_sec": 0 00:17:52.793 }, 00:17:52.793 "claimed": true, 00:17:52.793 "claim_type": "exclusive_write", 00:17:52.793 "zoned": false, 00:17:52.793 "supported_io_types": { 00:17:52.793 "read": true, 00:17:52.793 "write": true, 00:17:52.793 "unmap": true, 00:17:52.793 "flush": true, 00:17:52.793 "reset": true, 00:17:52.793 "nvme_admin": false, 00:17:52.793 "nvme_io": false, 00:17:52.793 "nvme_io_md": false, 00:17:52.793 "write_zeroes": true, 00:17:52.793 "zcopy": true, 00:17:52.793 "get_zone_info": false, 00:17:52.793 "zone_management": false, 00:17:52.794 "zone_append": false, 00:17:52.794 "compare": false, 00:17:52.794 "compare_and_write": false, 00:17:52.794 "abort": true, 00:17:52.794 "seek_hole": false, 00:17:52.794 "seek_data": false, 00:17:52.794 "copy": true, 00:17:52.794 "nvme_iov_md": false 00:17:52.794 }, 00:17:52.794 "memory_domains": [ 00:17:52.794 { 00:17:52.794 "dma_device_id": "system", 00:17:52.794 "dma_device_type": 1 00:17:52.794 }, 00:17:52.794 { 00:17:52.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.794 "dma_device_type": 2 00:17:52.794 } 00:17:52.794 ], 00:17:52.794 "driver_specific": {} 00:17:52.794 }' 00:17:52.794 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.053 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.053 
18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.053 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.053 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.053 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:53.053 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.053 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.053 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:53.053 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.053 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.312 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:53.312 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:53.312 [2024-07-24 18:21:01.843484] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:53.312 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.313 18:21:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.572 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.572 "name": "Existed_Raid", 00:17:53.572 "uuid": "a28d5867-f29b-4b90-a4e0-f6c7db6e3ab6", 00:17:53.573 "strip_size_kb": 0, 00:17:53.573 "state": "online", 00:17:53.573 "raid_level": "raid1", 00:17:53.573 "superblock": true, 00:17:53.573 "num_base_bdevs": 4, 00:17:53.573 "num_base_bdevs_discovered": 3, 00:17:53.573 "num_base_bdevs_operational": 3, 00:17:53.573 "base_bdevs_list": [ 00:17:53.573 { 00:17:53.573 "name": null, 00:17:53.573 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:53.573 "is_configured": false, 00:17:53.573 "data_offset": 2048, 00:17:53.573 "data_size": 63488 00:17:53.573 }, 00:17:53.573 { 00:17:53.573 "name": "BaseBdev2", 00:17:53.573 "uuid": "0244f551-e491-4138-93fe-008e5c4cd8ce", 00:17:53.573 "is_configured": true, 00:17:53.573 "data_offset": 2048, 00:17:53.573 "data_size": 63488 00:17:53.573 }, 00:17:53.573 { 00:17:53.573 "name": "BaseBdev3", 00:17:53.573 "uuid": "bbdc2c48-847e-4942-b122-259f4b0cba1c", 00:17:53.573 "is_configured": true, 00:17:53.573 "data_offset": 2048, 00:17:53.573 "data_size": 63488 00:17:53.573 }, 00:17:53.573 { 00:17:53.573 "name": "BaseBdev4", 00:17:53.573 "uuid": "33567ebb-7165-457b-8774-4ce9e4be1212", 00:17:53.573 "is_configured": true, 00:17:53.573 "data_offset": 2048, 00:17:53.573 "data_size": 63488 00:17:53.573 } 00:17:53.573 ] 00:17:53.573 }' 00:17:53.573 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.573 18:21:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.142 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:54.142 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:54.142 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:54.142 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.142 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:54.142 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:54.142 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:54.402 [2024-07-24 18:21:02.863065] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:54.402 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:54.402 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:54.402 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.402 18:21:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:54.661 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:54.661 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:54.661 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:54.661 [2024-07-24 18:21:03.213708] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:54.661 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:54.661 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:54.661 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.661 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:54.921 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:54.921 18:21:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:54.921 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:55.181 [2024-07-24 18:21:03.568165] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:55.181 [2024-07-24 18:21:03.568227] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.181 [2024-07-24 18:21:03.577943] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.181 [2024-07-24 18:21:03.577983] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.181 [2024-07-24 18:21:03.577991] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdb8ab0 name Existed_Raid, state offline 00:17:55.181 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:55.181 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:55.181 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:55.181 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.181 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:55.181 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:55.181 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:55.181 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:55.181 18:21:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:55.181 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:55.440 BaseBdev2 00:17:55.440 18:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:55.440 18:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:55.440 18:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:55.440 18:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:55.440 18:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:55.440 18:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:55.440 18:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.699 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:55.699 [ 00:17:55.699 { 00:17:55.699 "name": "BaseBdev2", 00:17:55.699 "aliases": [ 00:17:55.699 "15dc4477-fb06-4c1d-8c3b-46d752b8b702" 00:17:55.699 ], 00:17:55.699 "product_name": "Malloc disk", 00:17:55.699 "block_size": 512, 00:17:55.699 "num_blocks": 65536, 00:17:55.699 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:17:55.699 "assigned_rate_limits": { 00:17:55.699 "rw_ios_per_sec": 0, 00:17:55.699 "rw_mbytes_per_sec": 0, 00:17:55.699 "r_mbytes_per_sec": 0, 00:17:55.699 "w_mbytes_per_sec": 0 00:17:55.699 }, 00:17:55.699 "claimed": false, 00:17:55.699 "zoned": false, 
00:17:55.699 "supported_io_types": { 00:17:55.699 "read": true, 00:17:55.699 "write": true, 00:17:55.699 "unmap": true, 00:17:55.699 "flush": true, 00:17:55.699 "reset": true, 00:17:55.699 "nvme_admin": false, 00:17:55.700 "nvme_io": false, 00:17:55.700 "nvme_io_md": false, 00:17:55.700 "write_zeroes": true, 00:17:55.700 "zcopy": true, 00:17:55.700 "get_zone_info": false, 00:17:55.700 "zone_management": false, 00:17:55.700 "zone_append": false, 00:17:55.700 "compare": false, 00:17:55.700 "compare_and_write": false, 00:17:55.700 "abort": true, 00:17:55.700 "seek_hole": false, 00:17:55.700 "seek_data": false, 00:17:55.700 "copy": true, 00:17:55.700 "nvme_iov_md": false 00:17:55.700 }, 00:17:55.700 "memory_domains": [ 00:17:55.700 { 00:17:55.700 "dma_device_id": "system", 00:17:55.700 "dma_device_type": 1 00:17:55.700 }, 00:17:55.700 { 00:17:55.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.700 "dma_device_type": 2 00:17:55.700 } 00:17:55.700 ], 00:17:55.700 "driver_specific": {} 00:17:55.700 } 00:17:55.700 ] 00:17:55.700 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:55.700 18:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:55.700 18:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:55.700 18:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:55.959 BaseBdev3 00:17:55.959 18:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:55.959 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:55.959 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:55.959 18:21:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:55.959 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:55.959 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:55.959 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.218 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:56.218 [ 00:17:56.218 { 00:17:56.218 "name": "BaseBdev3", 00:17:56.218 "aliases": [ 00:17:56.218 "580cc9ad-4952-4ef6-b32b-32b0e3506cf5" 00:17:56.218 ], 00:17:56.218 "product_name": "Malloc disk", 00:17:56.218 "block_size": 512, 00:17:56.218 "num_blocks": 65536, 00:17:56.218 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:17:56.218 "assigned_rate_limits": { 00:17:56.218 "rw_ios_per_sec": 0, 00:17:56.218 "rw_mbytes_per_sec": 0, 00:17:56.218 "r_mbytes_per_sec": 0, 00:17:56.218 "w_mbytes_per_sec": 0 00:17:56.218 }, 00:17:56.218 "claimed": false, 00:17:56.218 "zoned": false, 00:17:56.218 "supported_io_types": { 00:17:56.218 "read": true, 00:17:56.218 "write": true, 00:17:56.218 "unmap": true, 00:17:56.218 "flush": true, 00:17:56.218 "reset": true, 00:17:56.218 "nvme_admin": false, 00:17:56.218 "nvme_io": false, 00:17:56.218 "nvme_io_md": false, 00:17:56.218 "write_zeroes": true, 00:17:56.218 "zcopy": true, 00:17:56.218 "get_zone_info": false, 00:17:56.218 "zone_management": false, 00:17:56.218 "zone_append": false, 00:17:56.218 "compare": false, 00:17:56.218 "compare_and_write": false, 00:17:56.218 "abort": true, 00:17:56.218 "seek_hole": false, 00:17:56.218 "seek_data": false, 00:17:56.218 "copy": true, 00:17:56.218 "nvme_iov_md": 
false 00:17:56.218 }, 00:17:56.218 "memory_domains": [ 00:17:56.218 { 00:17:56.218 "dma_device_id": "system", 00:17:56.218 "dma_device_type": 1 00:17:56.218 }, 00:17:56.218 { 00:17:56.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.218 "dma_device_type": 2 00:17:56.218 } 00:17:56.218 ], 00:17:56.218 "driver_specific": {} 00:17:56.218 } 00:17:56.218 ] 00:17:56.218 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:56.218 18:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:56.218 18:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:56.218 18:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:56.478 BaseBdev4 00:17:56.478 18:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:56.478 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:56.478 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:56.478 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:56.478 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:56.478 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:56.478 18:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.737 18:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:56.737 [ 00:17:56.737 { 00:17:56.737 "name": "BaseBdev4", 00:17:56.737 "aliases": [ 00:17:56.737 "43d8465d-84e1-4b66-adb0-11fd0c091fb1" 00:17:56.737 ], 00:17:56.737 "product_name": "Malloc disk", 00:17:56.737 "block_size": 512, 00:17:56.737 "num_blocks": 65536, 00:17:56.737 "uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:17:56.737 "assigned_rate_limits": { 00:17:56.737 "rw_ios_per_sec": 0, 00:17:56.737 "rw_mbytes_per_sec": 0, 00:17:56.737 "r_mbytes_per_sec": 0, 00:17:56.737 "w_mbytes_per_sec": 0 00:17:56.737 }, 00:17:56.737 "claimed": false, 00:17:56.737 "zoned": false, 00:17:56.737 "supported_io_types": { 00:17:56.737 "read": true, 00:17:56.737 "write": true, 00:17:56.737 "unmap": true, 00:17:56.737 "flush": true, 00:17:56.737 "reset": true, 00:17:56.737 "nvme_admin": false, 00:17:56.737 "nvme_io": false, 00:17:56.737 "nvme_io_md": false, 00:17:56.737 "write_zeroes": true, 00:17:56.737 "zcopy": true, 00:17:56.737 "get_zone_info": false, 00:17:56.737 "zone_management": false, 00:17:56.737 "zone_append": false, 00:17:56.737 "compare": false, 00:17:56.737 "compare_and_write": false, 00:17:56.737 "abort": true, 00:17:56.737 "seek_hole": false, 00:17:56.737 "seek_data": false, 00:17:56.737 "copy": true, 00:17:56.737 "nvme_iov_md": false 00:17:56.737 }, 00:17:56.737 "memory_domains": [ 00:17:56.737 { 00:17:56.737 "dma_device_id": "system", 00:17:56.737 "dma_device_type": 1 00:17:56.737 }, 00:17:56.737 { 00:17:56.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.737 "dma_device_type": 2 00:17:56.737 } 00:17:56.737 ], 00:17:56.737 "driver_specific": {} 00:17:56.737 } 00:17:56.737 ] 00:17:56.737 18:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:56.737 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:56.737 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:17:56.737 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:56.997 [2024-07-24 18:21:05.414135] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:56.997 [2024-07-24 18:21:05.414164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:56.997 [2024-07-24 18:21:05.414180] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.997 [2024-07-24 18:21:05.415038] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:56.997 [2024-07-24 18:21:05.415066] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.997 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.256 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.256 "name": "Existed_Raid", 00:17:57.256 "uuid": "96e01449-980d-41f9-87ed-5f2222434474", 00:17:57.256 "strip_size_kb": 0, 00:17:57.256 "state": "configuring", 00:17:57.256 "raid_level": "raid1", 00:17:57.256 "superblock": true, 00:17:57.256 "num_base_bdevs": 4, 00:17:57.256 "num_base_bdevs_discovered": 3, 00:17:57.256 "num_base_bdevs_operational": 4, 00:17:57.256 "base_bdevs_list": [ 00:17:57.256 { 00:17:57.256 "name": "BaseBdev1", 00:17:57.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.256 "is_configured": false, 00:17:57.256 "data_offset": 0, 00:17:57.256 "data_size": 0 00:17:57.257 }, 00:17:57.257 { 00:17:57.257 "name": "BaseBdev2", 00:17:57.257 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:17:57.257 "is_configured": true, 00:17:57.257 "data_offset": 2048, 00:17:57.257 "data_size": 63488 00:17:57.257 }, 00:17:57.257 { 00:17:57.257 "name": "BaseBdev3", 00:17:57.257 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:17:57.257 "is_configured": true, 00:17:57.257 "data_offset": 2048, 00:17:57.257 "data_size": 63488 00:17:57.257 }, 00:17:57.257 { 00:17:57.257 "name": "BaseBdev4", 00:17:57.257 "uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:17:57.257 "is_configured": true, 00:17:57.257 "data_offset": 2048, 00:17:57.257 "data_size": 63488 00:17:57.257 } 00:17:57.257 ] 00:17:57.257 }' 00:17:57.257 18:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.257 
18:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.516 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:57.776 [2024-07-24 18:21:06.252301] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.776 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.035 18:21:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.035 "name": "Existed_Raid", 00:17:58.035 "uuid": "96e01449-980d-41f9-87ed-5f2222434474", 00:17:58.035 "strip_size_kb": 0, 00:17:58.035 "state": "configuring", 00:17:58.035 "raid_level": "raid1", 00:17:58.035 "superblock": true, 00:17:58.035 "num_base_bdevs": 4, 00:17:58.035 "num_base_bdevs_discovered": 2, 00:17:58.035 "num_base_bdevs_operational": 4, 00:17:58.035 "base_bdevs_list": [ 00:17:58.035 { 00:17:58.035 "name": "BaseBdev1", 00:17:58.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.035 "is_configured": false, 00:17:58.035 "data_offset": 0, 00:17:58.035 "data_size": 0 00:17:58.035 }, 00:17:58.035 { 00:17:58.035 "name": null, 00:17:58.035 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:17:58.035 "is_configured": false, 00:17:58.035 "data_offset": 2048, 00:17:58.035 "data_size": 63488 00:17:58.035 }, 00:17:58.035 { 00:17:58.035 "name": "BaseBdev3", 00:17:58.035 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:17:58.035 "is_configured": true, 00:17:58.035 "data_offset": 2048, 00:17:58.035 "data_size": 63488 00:17:58.035 }, 00:17:58.035 { 00:17:58.035 "name": "BaseBdev4", 00:17:58.035 "uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:17:58.035 "is_configured": true, 00:17:58.035 "data_offset": 2048, 00:17:58.035 "data_size": 63488 00:17:58.035 } 00:17:58.035 ] 00:17:58.035 }' 00:17:58.035 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.035 18:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.603 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.603 18:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:58.603 18:21:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:58.603 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:58.862 [2024-07-24 18:21:07.289692] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.862 BaseBdev1 00:17:58.863 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:58.863 18:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:58.863 18:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:58.863 18:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:58.863 18:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:58.863 18:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:58.863 18:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:59.122 [ 00:17:59.122 { 00:17:59.122 "name": "BaseBdev1", 00:17:59.122 "aliases": [ 00:17:59.122 "527eaa92-039f-4f73-93fc-65b7864139db" 00:17:59.122 ], 00:17:59.122 "product_name": "Malloc disk", 00:17:59.122 "block_size": 512, 00:17:59.122 "num_blocks": 65536, 00:17:59.122 "uuid": "527eaa92-039f-4f73-93fc-65b7864139db", 00:17:59.122 "assigned_rate_limits": { 00:17:59.122 "rw_ios_per_sec": 0, 00:17:59.122 
"rw_mbytes_per_sec": 0, 00:17:59.122 "r_mbytes_per_sec": 0, 00:17:59.122 "w_mbytes_per_sec": 0 00:17:59.122 }, 00:17:59.122 "claimed": true, 00:17:59.122 "claim_type": "exclusive_write", 00:17:59.122 "zoned": false, 00:17:59.122 "supported_io_types": { 00:17:59.122 "read": true, 00:17:59.122 "write": true, 00:17:59.122 "unmap": true, 00:17:59.122 "flush": true, 00:17:59.122 "reset": true, 00:17:59.122 "nvme_admin": false, 00:17:59.122 "nvme_io": false, 00:17:59.122 "nvme_io_md": false, 00:17:59.122 "write_zeroes": true, 00:17:59.122 "zcopy": true, 00:17:59.122 "get_zone_info": false, 00:17:59.122 "zone_management": false, 00:17:59.122 "zone_append": false, 00:17:59.122 "compare": false, 00:17:59.122 "compare_and_write": false, 00:17:59.122 "abort": true, 00:17:59.122 "seek_hole": false, 00:17:59.122 "seek_data": false, 00:17:59.122 "copy": true, 00:17:59.122 "nvme_iov_md": false 00:17:59.122 }, 00:17:59.122 "memory_domains": [ 00:17:59.122 { 00:17:59.122 "dma_device_id": "system", 00:17:59.122 "dma_device_type": 1 00:17:59.122 }, 00:17:59.122 { 00:17:59.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.122 "dma_device_type": 2 00:17:59.122 } 00:17:59.122 ], 00:17:59.122 "driver_specific": {} 00:17:59.122 } 00:17:59.122 ] 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:59.122 18:21:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.122 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.382 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.382 "name": "Existed_Raid", 00:17:59.382 "uuid": "96e01449-980d-41f9-87ed-5f2222434474", 00:17:59.382 "strip_size_kb": 0, 00:17:59.382 "state": "configuring", 00:17:59.382 "raid_level": "raid1", 00:17:59.382 "superblock": true, 00:17:59.382 "num_base_bdevs": 4, 00:17:59.382 "num_base_bdevs_discovered": 3, 00:17:59.382 "num_base_bdevs_operational": 4, 00:17:59.382 "base_bdevs_list": [ 00:17:59.382 { 00:17:59.382 "name": "BaseBdev1", 00:17:59.382 "uuid": "527eaa92-039f-4f73-93fc-65b7864139db", 00:17:59.382 "is_configured": true, 00:17:59.382 "data_offset": 2048, 00:17:59.382 "data_size": 63488 00:17:59.382 }, 00:17:59.382 { 00:17:59.382 "name": null, 00:17:59.382 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:17:59.382 "is_configured": false, 00:17:59.382 "data_offset": 2048, 00:17:59.382 "data_size": 63488 00:17:59.382 }, 00:17:59.382 { 00:17:59.382 "name": "BaseBdev3", 00:17:59.382 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:17:59.382 "is_configured": true, 00:17:59.382 
"data_offset": 2048, 00:17:59.382 "data_size": 63488 00:17:59.382 }, 00:17:59.382 { 00:17:59.382 "name": "BaseBdev4", 00:17:59.382 "uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:17:59.382 "is_configured": true, 00:17:59.382 "data_offset": 2048, 00:17:59.382 "data_size": 63488 00:17:59.382 } 00:17:59.382 ] 00:17:59.382 }' 00:17:59.382 18:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.382 18:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.950 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.950 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:59.950 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:59.950 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:00.210 [2024-07-24 18:21:08.621144] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.210 "name": "Existed_Raid", 00:18:00.210 "uuid": "96e01449-980d-41f9-87ed-5f2222434474", 00:18:00.210 "strip_size_kb": 0, 00:18:00.210 "state": "configuring", 00:18:00.210 "raid_level": "raid1", 00:18:00.210 "superblock": true, 00:18:00.210 "num_base_bdevs": 4, 00:18:00.210 "num_base_bdevs_discovered": 2, 00:18:00.210 "num_base_bdevs_operational": 4, 00:18:00.210 "base_bdevs_list": [ 00:18:00.210 { 00:18:00.210 "name": "BaseBdev1", 00:18:00.210 "uuid": "527eaa92-039f-4f73-93fc-65b7864139db", 00:18:00.210 "is_configured": true, 00:18:00.210 "data_offset": 2048, 00:18:00.210 "data_size": 63488 00:18:00.210 }, 00:18:00.210 { 00:18:00.210 "name": null, 00:18:00.210 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:18:00.210 "is_configured": false, 00:18:00.210 "data_offset": 2048, 00:18:00.210 "data_size": 63488 00:18:00.210 }, 00:18:00.210 { 00:18:00.210 "name": null, 00:18:00.210 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:18:00.210 "is_configured": false, 00:18:00.210 "data_offset": 2048, 00:18:00.210 "data_size": 
63488 00:18:00.210 }, 00:18:00.210 { 00:18:00.210 "name": "BaseBdev4", 00:18:00.210 "uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:18:00.210 "is_configured": true, 00:18:00.210 "data_offset": 2048, 00:18:00.210 "data_size": 63488 00:18:00.210 } 00:18:00.210 ] 00:18:00.210 }' 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.210 18:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.777 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:00.777 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.035 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:01.035 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:01.035 [2024-07-24 18:21:09.615731] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=4 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.294 "name": "Existed_Raid", 00:18:01.294 "uuid": "96e01449-980d-41f9-87ed-5f2222434474", 00:18:01.294 "strip_size_kb": 0, 00:18:01.294 "state": "configuring", 00:18:01.294 "raid_level": "raid1", 00:18:01.294 "superblock": true, 00:18:01.294 "num_base_bdevs": 4, 00:18:01.294 "num_base_bdevs_discovered": 3, 00:18:01.294 "num_base_bdevs_operational": 4, 00:18:01.294 "base_bdevs_list": [ 00:18:01.294 { 00:18:01.294 "name": "BaseBdev1", 00:18:01.294 "uuid": "527eaa92-039f-4f73-93fc-65b7864139db", 00:18:01.294 "is_configured": true, 00:18:01.294 "data_offset": 2048, 00:18:01.294 "data_size": 63488 00:18:01.294 }, 00:18:01.294 { 00:18:01.294 "name": null, 00:18:01.294 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:18:01.294 "is_configured": false, 00:18:01.294 "data_offset": 2048, 00:18:01.294 "data_size": 63488 00:18:01.294 }, 00:18:01.294 { 00:18:01.294 "name": "BaseBdev3", 00:18:01.294 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:18:01.294 "is_configured": true, 00:18:01.294 "data_offset": 2048, 00:18:01.294 "data_size": 63488 00:18:01.294 
}, 00:18:01.294 { 00:18:01.294 "name": "BaseBdev4", 00:18:01.294 "uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:18:01.294 "is_configured": true, 00:18:01.294 "data_offset": 2048, 00:18:01.294 "data_size": 63488 00:18:01.294 } 00:18:01.294 ] 00:18:01.294 }' 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.294 18:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.862 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.862 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:02.121 [2024-07-24 18:21:10.634360] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:02.121 18:21:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.121 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.381 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.381 "name": "Existed_Raid", 00:18:02.381 "uuid": "96e01449-980d-41f9-87ed-5f2222434474", 00:18:02.381 "strip_size_kb": 0, 00:18:02.381 "state": "configuring", 00:18:02.381 "raid_level": "raid1", 00:18:02.381 "superblock": true, 00:18:02.381 "num_base_bdevs": 4, 00:18:02.381 "num_base_bdevs_discovered": 2, 00:18:02.381 "num_base_bdevs_operational": 4, 00:18:02.381 "base_bdevs_list": [ 00:18:02.381 { 00:18:02.381 "name": null, 00:18:02.381 "uuid": "527eaa92-039f-4f73-93fc-65b7864139db", 00:18:02.381 "is_configured": false, 00:18:02.381 "data_offset": 2048, 00:18:02.381 "data_size": 63488 00:18:02.381 }, 00:18:02.381 { 00:18:02.381 "name": null, 00:18:02.381 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:18:02.381 "is_configured": false, 00:18:02.381 "data_offset": 2048, 00:18:02.381 "data_size": 63488 00:18:02.381 }, 00:18:02.381 { 00:18:02.381 "name": "BaseBdev3", 00:18:02.381 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:18:02.381 "is_configured": true, 00:18:02.381 "data_offset": 2048, 00:18:02.381 "data_size": 63488 00:18:02.381 }, 00:18:02.381 { 00:18:02.381 "name": "BaseBdev4", 00:18:02.381 
"uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:18:02.381 "is_configured": true, 00:18:02.381 "data_offset": 2048, 00:18:02.381 "data_size": 63488 00:18:02.381 } 00:18:02.381 ] 00:18:02.381 }' 00:18:02.381 18:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.381 18:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.950 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.950 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:02.950 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:02.950 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:03.210 [2024-07-24 18:21:11.646593] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:03.210 18:21:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.210 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.469 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.469 "name": "Existed_Raid", 00:18:03.469 "uuid": "96e01449-980d-41f9-87ed-5f2222434474", 00:18:03.469 "strip_size_kb": 0, 00:18:03.469 "state": "configuring", 00:18:03.469 "raid_level": "raid1", 00:18:03.469 "superblock": true, 00:18:03.469 "num_base_bdevs": 4, 00:18:03.469 "num_base_bdevs_discovered": 3, 00:18:03.469 "num_base_bdevs_operational": 4, 00:18:03.469 "base_bdevs_list": [ 00:18:03.469 { 00:18:03.469 "name": null, 00:18:03.469 "uuid": "527eaa92-039f-4f73-93fc-65b7864139db", 00:18:03.469 "is_configured": false, 00:18:03.469 "data_offset": 2048, 00:18:03.469 "data_size": 63488 00:18:03.469 }, 00:18:03.469 { 00:18:03.469 "name": "BaseBdev2", 00:18:03.469 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:18:03.469 "is_configured": true, 00:18:03.469 "data_offset": 2048, 00:18:03.469 "data_size": 63488 00:18:03.469 }, 00:18:03.469 { 00:18:03.469 "name": "BaseBdev3", 00:18:03.469 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:18:03.469 "is_configured": true, 00:18:03.469 "data_offset": 2048, 00:18:03.469 "data_size": 63488 00:18:03.469 }, 00:18:03.469 { 00:18:03.469 "name": "BaseBdev4", 
00:18:03.469 "uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:18:03.469 "is_configured": true, 00:18:03.469 "data_offset": 2048, 00:18:03.469 "data_size": 63488 00:18:03.469 } 00:18:03.469 ] 00:18:03.469 }' 00:18:03.469 18:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.469 18:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.729 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:03.988 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.988 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:03.988 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.988 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:04.247 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 527eaa92-039f-4f73-93fc-65b7864139db 00:18:04.247 [2024-07-24 18:21:12.836346] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:04.247 [2024-07-24 18:21:12.836471] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xdb01c0 00:18:04.247 [2024-07-24 18:21:12.836480] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:04.247 [2024-07-24 18:21:12.836594] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdb4890 00:18:04.247 [2024-07-24 18:21:12.836686] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdb01c0 00:18:04.247 [2024-07-24 18:21:12.836693] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xdb01c0 00:18:04.247 [2024-07-24 18:21:12.836755] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.247 NewBaseBdev 00:18:04.506 18:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:04.506 18:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:04.506 18:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:04.506 18:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:04.506 18:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:04.506 18:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:04.506 18:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.506 18:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:04.765 [ 00:18:04.765 { 00:18:04.765 "name": "NewBaseBdev", 00:18:04.765 "aliases": [ 00:18:04.765 "527eaa92-039f-4f73-93fc-65b7864139db" 00:18:04.765 ], 00:18:04.765 "product_name": "Malloc disk", 00:18:04.765 "block_size": 512, 00:18:04.765 "num_blocks": 65536, 00:18:04.765 "uuid": "527eaa92-039f-4f73-93fc-65b7864139db", 00:18:04.765 "assigned_rate_limits": { 00:18:04.765 "rw_ios_per_sec": 0, 00:18:04.765 "rw_mbytes_per_sec": 0, 00:18:04.765 "r_mbytes_per_sec": 0, 00:18:04.765 
"w_mbytes_per_sec": 0 00:18:04.765 }, 00:18:04.765 "claimed": true, 00:18:04.765 "claim_type": "exclusive_write", 00:18:04.765 "zoned": false, 00:18:04.765 "supported_io_types": { 00:18:04.765 "read": true, 00:18:04.765 "write": true, 00:18:04.765 "unmap": true, 00:18:04.765 "flush": true, 00:18:04.765 "reset": true, 00:18:04.765 "nvme_admin": false, 00:18:04.765 "nvme_io": false, 00:18:04.765 "nvme_io_md": false, 00:18:04.765 "write_zeroes": true, 00:18:04.765 "zcopy": true, 00:18:04.765 "get_zone_info": false, 00:18:04.765 "zone_management": false, 00:18:04.765 "zone_append": false, 00:18:04.765 "compare": false, 00:18:04.766 "compare_and_write": false, 00:18:04.766 "abort": true, 00:18:04.766 "seek_hole": false, 00:18:04.766 "seek_data": false, 00:18:04.766 "copy": true, 00:18:04.766 "nvme_iov_md": false 00:18:04.766 }, 00:18:04.766 "memory_domains": [ 00:18:04.766 { 00:18:04.766 "dma_device_id": "system", 00:18:04.766 "dma_device_type": 1 00:18:04.766 }, 00:18:04.766 { 00:18:04.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.766 "dma_device_type": 2 00:18:04.766 } 00:18:04.766 ], 00:18:04.766 "driver_specific": {} 00:18:04.766 } 00:18:04.766 ] 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.766 "name": "Existed_Raid", 00:18:04.766 "uuid": "96e01449-980d-41f9-87ed-5f2222434474", 00:18:04.766 "strip_size_kb": 0, 00:18:04.766 "state": "online", 00:18:04.766 "raid_level": "raid1", 00:18:04.766 "superblock": true, 00:18:04.766 "num_base_bdevs": 4, 00:18:04.766 "num_base_bdevs_discovered": 4, 00:18:04.766 "num_base_bdevs_operational": 4, 00:18:04.766 "base_bdevs_list": [ 00:18:04.766 { 00:18:04.766 "name": "NewBaseBdev", 00:18:04.766 "uuid": "527eaa92-039f-4f73-93fc-65b7864139db", 00:18:04.766 "is_configured": true, 00:18:04.766 "data_offset": 2048, 00:18:04.766 "data_size": 63488 00:18:04.766 }, 00:18:04.766 { 00:18:04.766 "name": "BaseBdev2", 00:18:04.766 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:18:04.766 "is_configured": true, 00:18:04.766 "data_offset": 2048, 00:18:04.766 "data_size": 63488 00:18:04.766 }, 00:18:04.766 { 00:18:04.766 "name": "BaseBdev3", 00:18:04.766 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:18:04.766 "is_configured": true, 00:18:04.766 "data_offset": 2048, 00:18:04.766 "data_size": 63488 00:18:04.766 }, 
00:18:04.766 { 00:18:04.766 "name": "BaseBdev4", 00:18:04.766 "uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:18:04.766 "is_configured": true, 00:18:04.766 "data_offset": 2048, 00:18:04.766 "data_size": 63488 00:18:04.766 } 00:18:04.766 ] 00:18:04.766 }' 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.766 18:21:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.334 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:05.334 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:05.334 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:05.334 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:05.334 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:05.334 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:05.334 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:05.334 18:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:05.593 [2024-07-24 18:21:14.003587] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.593 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:05.593 "name": "Existed_Raid", 00:18:05.593 "aliases": [ 00:18:05.593 "96e01449-980d-41f9-87ed-5f2222434474" 00:18:05.593 ], 00:18:05.593 "product_name": "Raid Volume", 00:18:05.593 "block_size": 512, 00:18:05.593 "num_blocks": 63488, 00:18:05.593 "uuid": "96e01449-980d-41f9-87ed-5f2222434474", 
00:18:05.593 "assigned_rate_limits": { 00:18:05.593 "rw_ios_per_sec": 0, 00:18:05.593 "rw_mbytes_per_sec": 0, 00:18:05.593 "r_mbytes_per_sec": 0, 00:18:05.593 "w_mbytes_per_sec": 0 00:18:05.593 }, 00:18:05.593 "claimed": false, 00:18:05.593 "zoned": false, 00:18:05.593 "supported_io_types": { 00:18:05.593 "read": true, 00:18:05.593 "write": true, 00:18:05.593 "unmap": false, 00:18:05.593 "flush": false, 00:18:05.593 "reset": true, 00:18:05.593 "nvme_admin": false, 00:18:05.593 "nvme_io": false, 00:18:05.593 "nvme_io_md": false, 00:18:05.593 "write_zeroes": true, 00:18:05.593 "zcopy": false, 00:18:05.593 "get_zone_info": false, 00:18:05.593 "zone_management": false, 00:18:05.593 "zone_append": false, 00:18:05.593 "compare": false, 00:18:05.593 "compare_and_write": false, 00:18:05.593 "abort": false, 00:18:05.593 "seek_hole": false, 00:18:05.593 "seek_data": false, 00:18:05.593 "copy": false, 00:18:05.593 "nvme_iov_md": false 00:18:05.593 }, 00:18:05.593 "memory_domains": [ 00:18:05.593 { 00:18:05.593 "dma_device_id": "system", 00:18:05.593 "dma_device_type": 1 00:18:05.593 }, 00:18:05.593 { 00:18:05.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.593 "dma_device_type": 2 00:18:05.593 }, 00:18:05.593 { 00:18:05.593 "dma_device_id": "system", 00:18:05.593 "dma_device_type": 1 00:18:05.593 }, 00:18:05.593 { 00:18:05.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.593 "dma_device_type": 2 00:18:05.593 }, 00:18:05.593 { 00:18:05.593 "dma_device_id": "system", 00:18:05.593 "dma_device_type": 1 00:18:05.593 }, 00:18:05.593 { 00:18:05.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.593 "dma_device_type": 2 00:18:05.593 }, 00:18:05.593 { 00:18:05.593 "dma_device_id": "system", 00:18:05.593 "dma_device_type": 1 00:18:05.593 }, 00:18:05.593 { 00:18:05.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.593 "dma_device_type": 2 00:18:05.593 } 00:18:05.593 ], 00:18:05.593 "driver_specific": { 00:18:05.593 "raid": { 00:18:05.593 "uuid": 
"96e01449-980d-41f9-87ed-5f2222434474", 00:18:05.593 "strip_size_kb": 0, 00:18:05.593 "state": "online", 00:18:05.593 "raid_level": "raid1", 00:18:05.593 "superblock": true, 00:18:05.593 "num_base_bdevs": 4, 00:18:05.593 "num_base_bdevs_discovered": 4, 00:18:05.593 "num_base_bdevs_operational": 4, 00:18:05.593 "base_bdevs_list": [ 00:18:05.593 { 00:18:05.593 "name": "NewBaseBdev", 00:18:05.593 "uuid": "527eaa92-039f-4f73-93fc-65b7864139db", 00:18:05.593 "is_configured": true, 00:18:05.593 "data_offset": 2048, 00:18:05.593 "data_size": 63488 00:18:05.593 }, 00:18:05.593 { 00:18:05.593 "name": "BaseBdev2", 00:18:05.593 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:18:05.593 "is_configured": true, 00:18:05.593 "data_offset": 2048, 00:18:05.593 "data_size": 63488 00:18:05.593 }, 00:18:05.593 { 00:18:05.593 "name": "BaseBdev3", 00:18:05.593 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:18:05.593 "is_configured": true, 00:18:05.593 "data_offset": 2048, 00:18:05.593 "data_size": 63488 00:18:05.593 }, 00:18:05.593 { 00:18:05.593 "name": "BaseBdev4", 00:18:05.593 "uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:18:05.593 "is_configured": true, 00:18:05.593 "data_offset": 2048, 00:18:05.593 "data_size": 63488 00:18:05.593 } 00:18:05.593 ] 00:18:05.593 } 00:18:05.593 } 00:18:05.593 }' 00:18:05.593 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:05.593 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:05.593 BaseBdev2 00:18:05.593 BaseBdev3 00:18:05.593 BaseBdev4' 00:18:05.593 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.593 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:18:05.593 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.852 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.852 "name": "NewBaseBdev", 00:18:05.852 "aliases": [ 00:18:05.852 "527eaa92-039f-4f73-93fc-65b7864139db" 00:18:05.852 ], 00:18:05.852 "product_name": "Malloc disk", 00:18:05.852 "block_size": 512, 00:18:05.852 "num_blocks": 65536, 00:18:05.852 "uuid": "527eaa92-039f-4f73-93fc-65b7864139db", 00:18:05.852 "assigned_rate_limits": { 00:18:05.852 "rw_ios_per_sec": 0, 00:18:05.852 "rw_mbytes_per_sec": 0, 00:18:05.852 "r_mbytes_per_sec": 0, 00:18:05.852 "w_mbytes_per_sec": 0 00:18:05.852 }, 00:18:05.852 "claimed": true, 00:18:05.852 "claim_type": "exclusive_write", 00:18:05.852 "zoned": false, 00:18:05.852 "supported_io_types": { 00:18:05.852 "read": true, 00:18:05.852 "write": true, 00:18:05.852 "unmap": true, 00:18:05.852 "flush": true, 00:18:05.852 "reset": true, 00:18:05.852 "nvme_admin": false, 00:18:05.852 "nvme_io": false, 00:18:05.852 "nvme_io_md": false, 00:18:05.852 "write_zeroes": true, 00:18:05.852 "zcopy": true, 00:18:05.852 "get_zone_info": false, 00:18:05.852 "zone_management": false, 00:18:05.852 "zone_append": false, 00:18:05.852 "compare": false, 00:18:05.852 "compare_and_write": false, 00:18:05.852 "abort": true, 00:18:05.852 "seek_hole": false, 00:18:05.852 "seek_data": false, 00:18:05.852 "copy": true, 00:18:05.852 "nvme_iov_md": false 00:18:05.852 }, 00:18:05.852 "memory_domains": [ 00:18:05.852 { 00:18:05.852 "dma_device_id": "system", 00:18:05.852 "dma_device_type": 1 00:18:05.852 }, 00:18:05.852 { 00:18:05.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.852 "dma_device_type": 2 00:18:05.852 } 00:18:05.852 ], 00:18:05.852 "driver_specific": {} 00:18:05.852 }' 00:18:05.852 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.852 18:21:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.852 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:05.852 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.852 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.852 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:05.852 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.112 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.112 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.112 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.112 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.112 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.112 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.112 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:06.112 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.371 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.371 "name": "BaseBdev2", 00:18:06.371 "aliases": [ 00:18:06.371 "15dc4477-fb06-4c1d-8c3b-46d752b8b702" 00:18:06.371 ], 00:18:06.371 "product_name": "Malloc disk", 00:18:06.371 "block_size": 512, 00:18:06.371 "num_blocks": 65536, 00:18:06.371 "uuid": "15dc4477-fb06-4c1d-8c3b-46d752b8b702", 00:18:06.371 
"assigned_rate_limits": { 00:18:06.371 "rw_ios_per_sec": 0, 00:18:06.371 "rw_mbytes_per_sec": 0, 00:18:06.371 "r_mbytes_per_sec": 0, 00:18:06.371 "w_mbytes_per_sec": 0 00:18:06.371 }, 00:18:06.371 "claimed": true, 00:18:06.371 "claim_type": "exclusive_write", 00:18:06.371 "zoned": false, 00:18:06.371 "supported_io_types": { 00:18:06.371 "read": true, 00:18:06.371 "write": true, 00:18:06.371 "unmap": true, 00:18:06.371 "flush": true, 00:18:06.371 "reset": true, 00:18:06.371 "nvme_admin": false, 00:18:06.371 "nvme_io": false, 00:18:06.371 "nvme_io_md": false, 00:18:06.371 "write_zeroes": true, 00:18:06.371 "zcopy": true, 00:18:06.371 "get_zone_info": false, 00:18:06.371 "zone_management": false, 00:18:06.371 "zone_append": false, 00:18:06.371 "compare": false, 00:18:06.371 "compare_and_write": false, 00:18:06.371 "abort": true, 00:18:06.371 "seek_hole": false, 00:18:06.371 "seek_data": false, 00:18:06.371 "copy": true, 00:18:06.371 "nvme_iov_md": false 00:18:06.371 }, 00:18:06.371 "memory_domains": [ 00:18:06.371 { 00:18:06.371 "dma_device_id": "system", 00:18:06.371 "dma_device_type": 1 00:18:06.371 }, 00:18:06.371 { 00:18:06.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.371 "dma_device_type": 2 00:18:06.371 } 00:18:06.371 ], 00:18:06.371 "driver_specific": {} 00:18:06.371 }' 00:18:06.371 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.371 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.371 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.371 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.371 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.371 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.371 18:21:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.371 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.371 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.691 18:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.691 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.691 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.691 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.691 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.691 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:06.691 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.691 "name": "BaseBdev3", 00:18:06.691 "aliases": [ 00:18:06.691 "580cc9ad-4952-4ef6-b32b-32b0e3506cf5" 00:18:06.691 ], 00:18:06.691 "product_name": "Malloc disk", 00:18:06.691 "block_size": 512, 00:18:06.691 "num_blocks": 65536, 00:18:06.691 "uuid": "580cc9ad-4952-4ef6-b32b-32b0e3506cf5", 00:18:06.691 "assigned_rate_limits": { 00:18:06.691 "rw_ios_per_sec": 0, 00:18:06.691 "rw_mbytes_per_sec": 0, 00:18:06.691 "r_mbytes_per_sec": 0, 00:18:06.691 "w_mbytes_per_sec": 0 00:18:06.691 }, 00:18:06.691 "claimed": true, 00:18:06.691 "claim_type": "exclusive_write", 00:18:06.691 "zoned": false, 00:18:06.691 "supported_io_types": { 00:18:06.691 "read": true, 00:18:06.691 "write": true, 00:18:06.691 "unmap": true, 00:18:06.691 "flush": true, 00:18:06.691 "reset": true, 00:18:06.691 "nvme_admin": false, 00:18:06.691 "nvme_io": false, 00:18:06.691 "nvme_io_md": false, 00:18:06.691 
"write_zeroes": true, 00:18:06.691 "zcopy": true, 00:18:06.691 "get_zone_info": false, 00:18:06.691 "zone_management": false, 00:18:06.691 "zone_append": false, 00:18:06.691 "compare": false, 00:18:06.691 "compare_and_write": false, 00:18:06.691 "abort": true, 00:18:06.691 "seek_hole": false, 00:18:06.691 "seek_data": false, 00:18:06.691 "copy": true, 00:18:06.691 "nvme_iov_md": false 00:18:06.691 }, 00:18:06.691 "memory_domains": [ 00:18:06.691 { 00:18:06.691 "dma_device_id": "system", 00:18:06.691 "dma_device_type": 1 00:18:06.691 }, 00:18:06.691 { 00:18:06.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.691 "dma_device_type": 2 00:18:06.691 } 00:18:06.691 ], 00:18:06.691 "driver_specific": {} 00:18:06.691 }' 00:18:06.691 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.691 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:06.953 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:07.212 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:07.212 "name": "BaseBdev4", 00:18:07.212 "aliases": [ 00:18:07.212 "43d8465d-84e1-4b66-adb0-11fd0c091fb1" 00:18:07.212 ], 00:18:07.212 "product_name": "Malloc disk", 00:18:07.212 "block_size": 512, 00:18:07.212 "num_blocks": 65536, 00:18:07.212 "uuid": "43d8465d-84e1-4b66-adb0-11fd0c091fb1", 00:18:07.212 "assigned_rate_limits": { 00:18:07.212 "rw_ios_per_sec": 0, 00:18:07.212 "rw_mbytes_per_sec": 0, 00:18:07.212 "r_mbytes_per_sec": 0, 00:18:07.212 "w_mbytes_per_sec": 0 00:18:07.212 }, 00:18:07.212 "claimed": true, 00:18:07.212 "claim_type": "exclusive_write", 00:18:07.212 "zoned": false, 00:18:07.212 "supported_io_types": { 00:18:07.212 "read": true, 00:18:07.212 "write": true, 00:18:07.212 "unmap": true, 00:18:07.212 "flush": true, 00:18:07.212 "reset": true, 00:18:07.212 "nvme_admin": false, 00:18:07.212 "nvme_io": false, 00:18:07.212 "nvme_io_md": false, 00:18:07.212 "write_zeroes": true, 00:18:07.212 "zcopy": true, 00:18:07.212 "get_zone_info": false, 00:18:07.212 "zone_management": false, 00:18:07.212 "zone_append": false, 00:18:07.212 "compare": false, 00:18:07.212 "compare_and_write": false, 00:18:07.212 "abort": true, 00:18:07.212 "seek_hole": false, 00:18:07.212 "seek_data": false, 00:18:07.212 "copy": true, 00:18:07.212 "nvme_iov_md": false 00:18:07.212 }, 00:18:07.212 "memory_domains": [ 00:18:07.212 { 00:18:07.212 "dma_device_id": "system", 00:18:07.212 "dma_device_type": 1 00:18:07.212 }, 00:18:07.212 { 00:18:07.212 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.212 "dma_device_type": 2 00:18:07.212 } 00:18:07.212 ], 00:18:07.212 "driver_specific": {} 00:18:07.212 }' 00:18:07.212 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.212 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.212 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:07.212 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.472 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.472 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:07.472 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.472 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.472 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:07.472 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.472 18:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.472 18:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.472 18:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:07.731 [2024-07-24 18:21:16.160940] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.731 [2024-07-24 18:21:16.160960] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.731 [2024-07-24 18:21:16.161002] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:18:07.731 [2024-07-24 18:21:16.161193] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.731 [2024-07-24 18:21:16.161202] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdb01c0 name Existed_Raid, state offline 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 2245853 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2245853 ']' 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 2245853 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2245853 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2245853' 00:18:07.731 killing process with pid 2245853 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 2245853 00:18:07.731 [2024-07-24 18:21:16.226573] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.731 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 2245853 00:18:07.731 [2024-07-24 18:21:16.255848] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.991 18:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:07.991 
00:18:07.991 real 0m24.540s 00:18:07.991 user 0m44.841s 00:18:07.991 sys 0m4.739s 00:18:07.991 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:07.991 18:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.991 ************************************ 00:18:07.991 END TEST raid_state_function_test_sb 00:18:07.991 ************************************ 00:18:07.991 18:21:16 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:07.991 18:21:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:07.991 18:21:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:07.991 18:21:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.991 ************************************ 00:18:07.991 START TEST raid_superblock_test 00:18:07.991 ************************************ 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=2251214 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 2251214 /var/tmp/spdk-raid.sock 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 2251214 ']' 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:07.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:07.991 18:21:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.991 [2024-07-24 18:21:16.561000] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:18:07.991 [2024-07-24 18:21:16.561043] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251214 ] 00:18:08.251 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.251 EAL: Requested device 0000:b3:01.0 cannot be used 00:18:08.251 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.251 EAL: Requested device 0000:b3:01.1 cannot be used 00:18:08.251 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.251 EAL: Requested device 0000:b3:01.2 cannot be used 00:18:08.251 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.251 EAL: Requested device 0000:b3:01.3 cannot be used 00:18:08.251 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.251 EAL: Requested device 0000:b3:01.4 cannot be used 00:18:08.251 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.251 EAL: Requested device 0000:b3:01.5 cannot be used 00:18:08.251 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.251 EAL: Requested device 0000:b3:01.6 cannot be used 00:18:08.251 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.251 EAL: Requested device 0000:b3:01.7 cannot be used 00:18:08.251 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.251 EAL: Requested device 0000:b3:02.0 cannot be used 00:18:08.251 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested 
device 0000:b3:02.1 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b3:02.2 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b3:02.3 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b3:02.4 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b3:02.5 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b3:02.6 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b3:02.7 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:01.0 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:01.1 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:01.2 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:01.3 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:01.4 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:01.5 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:01.6 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:01.7 
cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:02.0 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:02.1 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:02.2 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:02.3 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:02.4 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:02.5 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:02.6 cannot be used 00:18:08.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:08.252 EAL: Requested device 0000:b5:02.7 cannot be used 00:18:08.252 [2024-07-24 18:21:16.652956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.252 [2024-07-24 18:21:16.730337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.252 [2024-07-24 18:21:16.783604] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.252 [2024-07-24 18:21:16.783633] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 
00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.821 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:09.081 malloc1 00:18:09.081 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:09.340 [2024-07-24 18:21:17.683907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:09.340 [2024-07-24 18:21:17.683941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.340 [2024-07-24 18:21:17.683955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x173ecb0 00:18:09.340 [2024-07-24 18:21:17.683963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.340 [2024-07-24 18:21:17.685109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.340 [2024-07-24 18:21:17.685132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:09.340 pt1 00:18:09.340 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( 
i++ )) 00:18:09.340 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:09.340 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:09.340 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:09.340 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:09.340 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:09.340 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:09.340 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:09.340 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:09.340 malloc2 00:18:09.340 18:21:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:09.600 [2024-07-24 18:21:18.028551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:09.600 [2024-07-24 18:21:18.028583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.600 [2024-07-24 18:21:18.028594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17400b0 00:18:09.600 [2024-07-24 18:21:18.028602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.600 [2024-07-24 18:21:18.029679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.600 [2024-07-24 18:21:18.029702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:18:09.600 pt2 00:18:09.600 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:09.600 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:09.600 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:09.600 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:09.600 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:09.600 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:09.600 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:09.600 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:09.600 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:09.859 malloc3 00:18:09.859 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:09.859 [2024-07-24 18:21:18.368951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:09.859 [2024-07-24 18:21:18.368983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.859 [2024-07-24 18:21:18.368994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d6a80 00:18:09.859 [2024-07-24 18:21:18.369003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.859 [2024-07-24 18:21:18.370047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:09.859 [2024-07-24 18:21:18.370070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:09.859 pt3 00:18:09.859 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:09.859 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:09.859 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:18:09.859 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:18:09.859 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:09.859 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:09.859 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:09.859 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:09.859 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:10.118 malloc4 00:18:10.118 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:10.119 [2024-07-24 18:21:18.705299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:10.119 [2024-07-24 18:21:18.705331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.119 [2024-07-24 18:21:18.705343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d93a0 00:18:10.119 [2024-07-24 18:21:18.705351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:18:10.119 [2024-07-24 18:21:18.706366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.119 [2024-07-24 18:21:18.706393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:10.119 pt4 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:10.378 [2024-07-24 18:21:18.873758] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:10.378 [2024-07-24 18:21:18.874613] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.378 [2024-07-24 18:21:18.874668] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:10.378 [2024-07-24 18:21:18.874696] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:10.378 [2024-07-24 18:21:18.874806] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1736c70 00:18:10.378 [2024-07-24 18:21:18.874813] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:10.378 [2024-07-24 18:21:18.874941] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1734eb0 00:18:10.378 [2024-07-24 18:21:18.875042] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1736c70 00:18:10.378 [2024-07-24 18:21:18.875048] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1736c70 00:18:10.378 [2024-07-24 18:21:18.875110] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.378 18:21:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.638 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.638 "name": "raid_bdev1", 00:18:10.638 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:10.638 "strip_size_kb": 0, 00:18:10.638 "state": "online", 00:18:10.638 "raid_level": "raid1", 00:18:10.638 "superblock": true, 00:18:10.638 "num_base_bdevs": 4, 00:18:10.638 "num_base_bdevs_discovered": 4, 00:18:10.638 "num_base_bdevs_operational": 4, 00:18:10.638 "base_bdevs_list": [ 00:18:10.638 { 00:18:10.638 "name": "pt1", 00:18:10.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.638 "is_configured": true, 
00:18:10.638 "data_offset": 2048, 00:18:10.638 "data_size": 63488 00:18:10.638 }, 00:18:10.638 { 00:18:10.638 "name": "pt2", 00:18:10.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.638 "is_configured": true, 00:18:10.638 "data_offset": 2048, 00:18:10.638 "data_size": 63488 00:18:10.638 }, 00:18:10.638 { 00:18:10.638 "name": "pt3", 00:18:10.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.638 "is_configured": true, 00:18:10.638 "data_offset": 2048, 00:18:10.638 "data_size": 63488 00:18:10.638 }, 00:18:10.638 { 00:18:10.638 "name": "pt4", 00:18:10.638 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:10.638 "is_configured": true, 00:18:10.638 "data_offset": 2048, 00:18:10.638 "data_size": 63488 00:18:10.638 } 00:18:10.638 ] 00:18:10.638 }' 00:18:10.638 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.638 18:21:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.208 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:11.208 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:11.208 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:11.208 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:11.208 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:11.208 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:11.208 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:11.208 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:11.208 [2024-07-24 18:21:19.688020] bdev_raid.c:1119:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:18:11.208 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:11.208 "name": "raid_bdev1", 00:18:11.208 "aliases": [ 00:18:11.208 "b0549f93-d576-4e06-af56-c07111667e01" 00:18:11.208 ], 00:18:11.208 "product_name": "Raid Volume", 00:18:11.208 "block_size": 512, 00:18:11.208 "num_blocks": 63488, 00:18:11.208 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:11.208 "assigned_rate_limits": { 00:18:11.208 "rw_ios_per_sec": 0, 00:18:11.208 "rw_mbytes_per_sec": 0, 00:18:11.208 "r_mbytes_per_sec": 0, 00:18:11.208 "w_mbytes_per_sec": 0 00:18:11.208 }, 00:18:11.208 "claimed": false, 00:18:11.208 "zoned": false, 00:18:11.208 "supported_io_types": { 00:18:11.208 "read": true, 00:18:11.208 "write": true, 00:18:11.208 "unmap": false, 00:18:11.208 "flush": false, 00:18:11.208 "reset": true, 00:18:11.208 "nvme_admin": false, 00:18:11.208 "nvme_io": false, 00:18:11.208 "nvme_io_md": false, 00:18:11.208 "write_zeroes": true, 00:18:11.208 "zcopy": false, 00:18:11.208 "get_zone_info": false, 00:18:11.208 "zone_management": false, 00:18:11.208 "zone_append": false, 00:18:11.208 "compare": false, 00:18:11.208 "compare_and_write": false, 00:18:11.208 "abort": false, 00:18:11.208 "seek_hole": false, 00:18:11.208 "seek_data": false, 00:18:11.208 "copy": false, 00:18:11.208 "nvme_iov_md": false 00:18:11.208 }, 00:18:11.208 "memory_domains": [ 00:18:11.208 { 00:18:11.208 "dma_device_id": "system", 00:18:11.208 "dma_device_type": 1 00:18:11.208 }, 00:18:11.208 { 00:18:11.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.208 "dma_device_type": 2 00:18:11.208 }, 00:18:11.208 { 00:18:11.208 "dma_device_id": "system", 00:18:11.208 "dma_device_type": 1 00:18:11.208 }, 00:18:11.208 { 00:18:11.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.208 "dma_device_type": 2 00:18:11.208 }, 00:18:11.208 { 00:18:11.208 "dma_device_id": "system", 00:18:11.208 "dma_device_type": 1 00:18:11.208 }, 00:18:11.208 { 
00:18:11.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.208 "dma_device_type": 2 00:18:11.208 }, 00:18:11.208 { 00:18:11.209 "dma_device_id": "system", 00:18:11.209 "dma_device_type": 1 00:18:11.209 }, 00:18:11.209 { 00:18:11.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.209 "dma_device_type": 2 00:18:11.209 } 00:18:11.209 ], 00:18:11.209 "driver_specific": { 00:18:11.209 "raid": { 00:18:11.209 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:11.209 "strip_size_kb": 0, 00:18:11.209 "state": "online", 00:18:11.209 "raid_level": "raid1", 00:18:11.209 "superblock": true, 00:18:11.209 "num_base_bdevs": 4, 00:18:11.209 "num_base_bdevs_discovered": 4, 00:18:11.209 "num_base_bdevs_operational": 4, 00:18:11.209 "base_bdevs_list": [ 00:18:11.209 { 00:18:11.209 "name": "pt1", 00:18:11.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.209 "is_configured": true, 00:18:11.209 "data_offset": 2048, 00:18:11.209 "data_size": 63488 00:18:11.209 }, 00:18:11.209 { 00:18:11.209 "name": "pt2", 00:18:11.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.209 "is_configured": true, 00:18:11.209 "data_offset": 2048, 00:18:11.209 "data_size": 63488 00:18:11.209 }, 00:18:11.209 { 00:18:11.209 "name": "pt3", 00:18:11.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.209 "is_configured": true, 00:18:11.209 "data_offset": 2048, 00:18:11.209 "data_size": 63488 00:18:11.209 }, 00:18:11.209 { 00:18:11.209 "name": "pt4", 00:18:11.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:11.209 "is_configured": true, 00:18:11.209 "data_offset": 2048, 00:18:11.209 "data_size": 63488 00:18:11.209 } 00:18:11.209 ] 00:18:11.209 } 00:18:11.209 } 00:18:11.209 }' 00:18:11.209 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:11.209 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:11.209 pt2 
00:18:11.209 pt3 00:18:11.209 pt4' 00:18:11.209 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:11.209 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:11.209 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:11.468 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:11.468 "name": "pt1", 00:18:11.468 "aliases": [ 00:18:11.468 "00000000-0000-0000-0000-000000000001" 00:18:11.468 ], 00:18:11.468 "product_name": "passthru", 00:18:11.468 "block_size": 512, 00:18:11.468 "num_blocks": 65536, 00:18:11.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.468 "assigned_rate_limits": { 00:18:11.468 "rw_ios_per_sec": 0, 00:18:11.468 "rw_mbytes_per_sec": 0, 00:18:11.468 "r_mbytes_per_sec": 0, 00:18:11.468 "w_mbytes_per_sec": 0 00:18:11.468 }, 00:18:11.468 "claimed": true, 00:18:11.468 "claim_type": "exclusive_write", 00:18:11.468 "zoned": false, 00:18:11.468 "supported_io_types": { 00:18:11.468 "read": true, 00:18:11.468 "write": true, 00:18:11.468 "unmap": true, 00:18:11.468 "flush": true, 00:18:11.468 "reset": true, 00:18:11.468 "nvme_admin": false, 00:18:11.468 "nvme_io": false, 00:18:11.468 "nvme_io_md": false, 00:18:11.468 "write_zeroes": true, 00:18:11.468 "zcopy": true, 00:18:11.468 "get_zone_info": false, 00:18:11.468 "zone_management": false, 00:18:11.468 "zone_append": false, 00:18:11.468 "compare": false, 00:18:11.468 "compare_and_write": false, 00:18:11.468 "abort": true, 00:18:11.469 "seek_hole": false, 00:18:11.469 "seek_data": false, 00:18:11.469 "copy": true, 00:18:11.469 "nvme_iov_md": false 00:18:11.469 }, 00:18:11.469 "memory_domains": [ 00:18:11.469 { 00:18:11.469 "dma_device_id": "system", 00:18:11.469 "dma_device_type": 1 00:18:11.469 }, 00:18:11.469 { 00:18:11.469 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.469 "dma_device_type": 2 00:18:11.469 } 00:18:11.469 ], 00:18:11.469 "driver_specific": { 00:18:11.469 "passthru": { 00:18:11.469 "name": "pt1", 00:18:11.469 "base_bdev_name": "malloc1" 00:18:11.469 } 00:18:11.469 } 00:18:11.469 }' 00:18:11.469 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.469 18:21:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.469 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:11.469 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.469 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.727 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:11.727 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:11.727 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:11.727 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:11.727 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:11.727 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:11.727 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:11.727 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:11.727 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:11.727 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:11.986 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:11.986 "name": "pt2", 
00:18:11.986 "aliases": [ 00:18:11.986 "00000000-0000-0000-0000-000000000002" 00:18:11.986 ], 00:18:11.986 "product_name": "passthru", 00:18:11.986 "block_size": 512, 00:18:11.986 "num_blocks": 65536, 00:18:11.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.986 "assigned_rate_limits": { 00:18:11.986 "rw_ios_per_sec": 0, 00:18:11.986 "rw_mbytes_per_sec": 0, 00:18:11.986 "r_mbytes_per_sec": 0, 00:18:11.986 "w_mbytes_per_sec": 0 00:18:11.986 }, 00:18:11.986 "claimed": true, 00:18:11.986 "claim_type": "exclusive_write", 00:18:11.986 "zoned": false, 00:18:11.986 "supported_io_types": { 00:18:11.986 "read": true, 00:18:11.986 "write": true, 00:18:11.986 "unmap": true, 00:18:11.986 "flush": true, 00:18:11.986 "reset": true, 00:18:11.986 "nvme_admin": false, 00:18:11.986 "nvme_io": false, 00:18:11.986 "nvme_io_md": false, 00:18:11.986 "write_zeroes": true, 00:18:11.986 "zcopy": true, 00:18:11.986 "get_zone_info": false, 00:18:11.986 "zone_management": false, 00:18:11.986 "zone_append": false, 00:18:11.986 "compare": false, 00:18:11.986 "compare_and_write": false, 00:18:11.986 "abort": true, 00:18:11.986 "seek_hole": false, 00:18:11.986 "seek_data": false, 00:18:11.986 "copy": true, 00:18:11.986 "nvme_iov_md": false 00:18:11.986 }, 00:18:11.986 "memory_domains": [ 00:18:11.986 { 00:18:11.986 "dma_device_id": "system", 00:18:11.986 "dma_device_type": 1 00:18:11.986 }, 00:18:11.986 { 00:18:11.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.986 "dma_device_type": 2 00:18:11.986 } 00:18:11.986 ], 00:18:11.986 "driver_specific": { 00:18:11.986 "passthru": { 00:18:11.986 "name": "pt2", 00:18:11.986 "base_bdev_name": "malloc2" 00:18:11.986 } 00:18:11.986 } 00:18:11.986 }' 00:18:11.986 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.986 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:11.986 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 
]] 00:18:11.986 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:11.986 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.245 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:12.245 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.245 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.245 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:12.245 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.245 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.245 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:12.245 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:12.245 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:12.245 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:12.504 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:12.504 "name": "pt3", 00:18:12.504 "aliases": [ 00:18:12.504 "00000000-0000-0000-0000-000000000003" 00:18:12.504 ], 00:18:12.504 "product_name": "passthru", 00:18:12.504 "block_size": 512, 00:18:12.504 "num_blocks": 65536, 00:18:12.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.504 "assigned_rate_limits": { 00:18:12.504 "rw_ios_per_sec": 0, 00:18:12.504 "rw_mbytes_per_sec": 0, 00:18:12.504 "r_mbytes_per_sec": 0, 00:18:12.504 "w_mbytes_per_sec": 0 00:18:12.504 }, 00:18:12.504 "claimed": true, 00:18:12.504 "claim_type": "exclusive_write", 00:18:12.504 "zoned": false, 00:18:12.504 
"supported_io_types": { 00:18:12.504 "read": true, 00:18:12.504 "write": true, 00:18:12.504 "unmap": true, 00:18:12.504 "flush": true, 00:18:12.504 "reset": true, 00:18:12.504 "nvme_admin": false, 00:18:12.504 "nvme_io": false, 00:18:12.504 "nvme_io_md": false, 00:18:12.504 "write_zeroes": true, 00:18:12.504 "zcopy": true, 00:18:12.504 "get_zone_info": false, 00:18:12.504 "zone_management": false, 00:18:12.504 "zone_append": false, 00:18:12.504 "compare": false, 00:18:12.504 "compare_and_write": false, 00:18:12.504 "abort": true, 00:18:12.504 "seek_hole": false, 00:18:12.504 "seek_data": false, 00:18:12.504 "copy": true, 00:18:12.504 "nvme_iov_md": false 00:18:12.504 }, 00:18:12.504 "memory_domains": [ 00:18:12.504 { 00:18:12.504 "dma_device_id": "system", 00:18:12.504 "dma_device_type": 1 00:18:12.504 }, 00:18:12.504 { 00:18:12.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.504 "dma_device_type": 2 00:18:12.504 } 00:18:12.504 ], 00:18:12.504 "driver_specific": { 00:18:12.504 "passthru": { 00:18:12.504 "name": "pt3", 00:18:12.505 "base_bdev_name": "malloc3" 00:18:12.505 } 00:18:12.505 } 00:18:12.505 }' 00:18:12.505 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:12.505 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:12.505 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:12.505 18:21:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.505 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:12.505 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:12.505 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.764 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:12.764 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:18:12.764 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.764 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:12.764 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:12.764 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:12.764 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:12.764 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.024 "name": "pt4", 00:18:13.024 "aliases": [ 00:18:13.024 "00000000-0000-0000-0000-000000000004" 00:18:13.024 ], 00:18:13.024 "product_name": "passthru", 00:18:13.024 "block_size": 512, 00:18:13.024 "num_blocks": 65536, 00:18:13.024 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:13.024 "assigned_rate_limits": { 00:18:13.024 "rw_ios_per_sec": 0, 00:18:13.024 "rw_mbytes_per_sec": 0, 00:18:13.024 "r_mbytes_per_sec": 0, 00:18:13.024 "w_mbytes_per_sec": 0 00:18:13.024 }, 00:18:13.024 "claimed": true, 00:18:13.024 "claim_type": "exclusive_write", 00:18:13.024 "zoned": false, 00:18:13.024 "supported_io_types": { 00:18:13.024 "read": true, 00:18:13.024 "write": true, 00:18:13.024 "unmap": true, 00:18:13.024 "flush": true, 00:18:13.024 "reset": true, 00:18:13.024 "nvme_admin": false, 00:18:13.024 "nvme_io": false, 00:18:13.024 "nvme_io_md": false, 00:18:13.024 "write_zeroes": true, 00:18:13.024 "zcopy": true, 00:18:13.024 "get_zone_info": false, 00:18:13.024 "zone_management": false, 00:18:13.024 "zone_append": false, 00:18:13.024 "compare": false, 00:18:13.024 "compare_and_write": false, 00:18:13.024 "abort": true, 00:18:13.024 "seek_hole": false, 
00:18:13.024 "seek_data": false, 00:18:13.024 "copy": true, 00:18:13.024 "nvme_iov_md": false 00:18:13.024 }, 00:18:13.024 "memory_domains": [ 00:18:13.024 { 00:18:13.024 "dma_device_id": "system", 00:18:13.024 "dma_device_type": 1 00:18:13.024 }, 00:18:13.024 { 00:18:13.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.024 "dma_device_type": 2 00:18:13.024 } 00:18:13.024 ], 00:18:13.024 "driver_specific": { 00:18:13.024 "passthru": { 00:18:13.024 "name": "pt4", 00:18:13.024 "base_bdev_name": "malloc4" 00:18:13.024 } 00:18:13.024 } 00:18:13.024 }' 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:13.024 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.283 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.283 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:13.283 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:13.283 18:21:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:13.283 [2024-07-24 18:21:21.849590] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.283 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b0549f93-d576-4e06-af56-c07111667e01 00:18:13.283 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z b0549f93-d576-4e06-af56-c07111667e01 ']' 00:18:13.283 18:21:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:13.542 [2024-07-24 18:21:22.021839] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.542 [2024-07-24 18:21:22.021852] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.542 [2024-07-24 18:21:22.021887] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.542 [2024-07-24 18:21:22.021947] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.542 [2024-07-24 18:21:22.021955] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1736c70 name raid_bdev1, state offline 00:18:13.542 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.542 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:13.801 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:13.801 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:13.801 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:13.801 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:13.801 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:13.801 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:14.061 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.061 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:14.320 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:14.320 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:14.320 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:14.320 18:21:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:14.579 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:14.839 [2024-07-24 18:21:23.196837] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:14.839 [2024-07-24 18:21:23.197784] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:14.839 [2024-07-24 18:21:23.197814] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:14.839 [2024-07-24 
18:21:23.197835] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:14.839 [2024-07-24 18:21:23.197869] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:14.839 [2024-07-24 18:21:23.197895] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:14.839 [2024-07-24 18:21:23.197910] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:14.839 [2024-07-24 18:21:23.197929] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:14.839 [2024-07-24 18:21:23.197940] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.839 [2024-07-24 18:21:23.197947] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18e2730 name raid_bdev1, state configuring 00:18:14.839 request: 00:18:14.839 { 00:18:14.839 "name": "raid_bdev1", 00:18:14.839 "raid_level": "raid1", 00:18:14.839 "base_bdevs": [ 00:18:14.839 "malloc1", 00:18:14.839 "malloc2", 00:18:14.839 "malloc3", 00:18:14.839 "malloc4" 00:18:14.839 ], 00:18:14.839 "superblock": false, 00:18:14.839 "method": "bdev_raid_create", 00:18:14.839 "req_id": 1 00:18:14.839 } 00:18:14.839 Got JSON-RPC error response 00:18:14.839 response: 00:18:14.839 { 00:18:14.839 "code": -17, 00:18:14.839 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:14.839 } 00:18:14.839 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:14.839 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.839 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.839 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:18:14.839 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:14.839 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.839 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:14.839 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:14.839 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:15.098 [2024-07-24 18:21:23.541692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:15.098 [2024-07-24 18:21:23.541716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.098 [2024-07-24 18:21:23.541730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x173eee0 00:18:15.098 [2024-07-24 18:21:23.541737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.098 [2024-07-24 18:21:23.542840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.098 [2024-07-24 18:21:23.542862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:15.098 [2024-07-24 18:21:23.542906] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:15.098 [2024-07-24 18:21:23.542928] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:15.098 pt1 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:15.098 
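The `verify_raid_bdev_state` helper entered above pulls the target bdev out of `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'`. As a hedged sketch (the RPC output is assumed to be a JSON array of bdev objects, with values taken from the `raid_bdev_info` dump in this log), the same selection can be done in plain Python:

```python
import json

# Sample array mimicking `rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all`
# output; fields and values are copied from the raid_bdev_info dump in this log.
bdevs_json = json.dumps([
    {
        "name": "raid_bdev1",
        "state": "configuring",
        "raid_level": "raid1",
        "num_base_bdevs": 4,
        "num_base_bdevs_discovered": 1,
        "num_base_bdevs_operational": 4,
    }
])

def select_bdev(dump, name):
    # Equivalent of jq '.[] | select(.name == NAME)' over the RPC dump:
    # return the first bdev object with a matching name, or None.
    return next((b for b in json.loads(dump) if b["name"] == name), None)

info = select_bdev(bdevs_json, "raid_bdev1")
```

The helper then compares `state`, `raid_level`, and the discovered/operational base-bdev counts against its expected values, which is what the `configuring raid1 0 4` arguments above encode.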
18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.098 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.357 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.357 "name": "raid_bdev1", 00:18:15.357 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:15.357 "strip_size_kb": 0, 00:18:15.357 "state": "configuring", 00:18:15.357 "raid_level": "raid1", 00:18:15.357 "superblock": true, 00:18:15.357 "num_base_bdevs": 4, 00:18:15.357 "num_base_bdevs_discovered": 1, 00:18:15.357 "num_base_bdevs_operational": 4, 00:18:15.357 "base_bdevs_list": [ 00:18:15.357 { 00:18:15.357 "name": "pt1", 00:18:15.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.357 "is_configured": true, 00:18:15.357 "data_offset": 2048, 00:18:15.357 "data_size": 63488 00:18:15.357 }, 00:18:15.357 { 00:18:15.357 "name": null, 00:18:15.357 "uuid": "00000000-0000-0000-0000-000000000002", 
00:18:15.357 "is_configured": false, 00:18:15.357 "data_offset": 2048, 00:18:15.357 "data_size": 63488 00:18:15.357 }, 00:18:15.357 { 00:18:15.357 "name": null, 00:18:15.357 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:15.357 "is_configured": false, 00:18:15.357 "data_offset": 2048, 00:18:15.357 "data_size": 63488 00:18:15.357 }, 00:18:15.357 { 00:18:15.357 "name": null, 00:18:15.357 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:15.357 "is_configured": false, 00:18:15.357 "data_offset": 2048, 00:18:15.357 "data_size": 63488 00:18:15.357 } 00:18:15.357 ] 00:18:15.357 }' 00:18:15.357 18:21:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.357 18:21:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.932 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:18:15.932 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:15.932 [2024-07-24 18:21:24.375850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:15.932 [2024-07-24 18:21:24.375888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.932 [2024-07-24 18:21:24.375902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d8080 00:18:15.932 [2024-07-24 18:21:24.375910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.932 [2024-07-24 18:21:24.376154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.932 [2024-07-24 18:21:24.376168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:15.932 [2024-07-24 18:21:24.376211] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:15.932 
[2024-07-24 18:21:24.376225] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:15.932 pt2 00:18:15.932 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:16.191 [2024-07-24 18:21:24.544296] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:16.191 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:16.191 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:16.191 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:16.191 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:16.191 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:16.192 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:16.192 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.192 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.192 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.192 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.192 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.192 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.192 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.192 "name": "raid_bdev1", 
00:18:16.192 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:16.192 "strip_size_kb": 0, 00:18:16.192 "state": "configuring", 00:18:16.192 "raid_level": "raid1", 00:18:16.192 "superblock": true, 00:18:16.192 "num_base_bdevs": 4, 00:18:16.192 "num_base_bdevs_discovered": 1, 00:18:16.192 "num_base_bdevs_operational": 4, 00:18:16.192 "base_bdevs_list": [ 00:18:16.192 { 00:18:16.192 "name": "pt1", 00:18:16.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:16.192 "is_configured": true, 00:18:16.192 "data_offset": 2048, 00:18:16.192 "data_size": 63488 00:18:16.192 }, 00:18:16.192 { 00:18:16.192 "name": null, 00:18:16.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.192 "is_configured": false, 00:18:16.192 "data_offset": 2048, 00:18:16.192 "data_size": 63488 00:18:16.192 }, 00:18:16.192 { 00:18:16.192 "name": null, 00:18:16.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:16.192 "is_configured": false, 00:18:16.192 "data_offset": 2048, 00:18:16.192 "data_size": 63488 00:18:16.192 }, 00:18:16.192 { 00:18:16.192 "name": null, 00:18:16.192 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:16.192 "is_configured": false, 00:18:16.192 "data_offset": 2048, 00:18:16.192 "data_size": 63488 00:18:16.192 } 00:18:16.192 ] 00:18:16.192 }' 00:18:16.192 18:21:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.192 18:21:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.760 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:16.760 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:16.760 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:17.019 [2024-07-24 18:21:25.366396] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:17.019 [2024-07-24 18:21:25.366432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.019 [2024-07-24 18:21:25.366445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17357a0 00:18:17.019 [2024-07-24 18:21:25.366453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.019 [2024-07-24 18:21:25.366701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.019 [2024-07-24 18:21:25.366714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:17.019 [2024-07-24 18:21:25.366759] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:17.019 [2024-07-24 18:21:25.366771] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:17.019 pt2 00:18:17.019 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:17.019 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:17.019 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:17.019 [2024-07-24 18:21:25.538844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:17.019 [2024-07-24 18:21:25.538864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.019 [2024-07-24 18:21:25.538875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1738010 00:18:17.019 [2024-07-24 18:21:25.538882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.019 [2024-07-24 18:21:25.539062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.019 [2024-07-24 
18:21:25.539074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:17.019 [2024-07-24 18:21:25.539105] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:17.019 [2024-07-24 18:21:25.539116] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:17.019 pt3 00:18:17.019 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:17.019 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:17.019 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:17.278 [2024-07-24 18:21:25.703269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:17.278 [2024-07-24 18:21:25.703294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.279 [2024-07-24 18:21:25.703308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17392c0 00:18:17.279 [2024-07-24 18:21:25.703315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.279 [2024-07-24 18:21:25.703485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.279 [2024-07-24 18:21:25.703496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:17.279 [2024-07-24 18:21:25.703526] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:17.279 [2024-07-24 18:21:25.703536] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:17.279 [2024-07-24 18:21:25.703608] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1735ec0 00:18:17.279 [2024-07-24 18:21:25.703614] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 63488, blocklen 512 00:18:17.279 [2024-07-24 18:21:25.703722] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x173b6e0 00:18:17.279 [2024-07-24 18:21:25.703805] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1735ec0 00:18:17.279 [2024-07-24 18:21:25.703811] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1735ec0 00:18:17.279 [2024-07-24 18:21:25.703871] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.279 pt4 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.279 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.538 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.538 "name": "raid_bdev1", 00:18:17.538 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:17.538 "strip_size_kb": 0, 00:18:17.538 "state": "online", 00:18:17.538 "raid_level": "raid1", 00:18:17.538 "superblock": true, 00:18:17.538 "num_base_bdevs": 4, 00:18:17.538 "num_base_bdevs_discovered": 4, 00:18:17.538 "num_base_bdevs_operational": 4, 00:18:17.538 "base_bdevs_list": [ 00:18:17.538 { 00:18:17.538 "name": "pt1", 00:18:17.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:17.538 "is_configured": true, 00:18:17.538 "data_offset": 2048, 00:18:17.538 "data_size": 63488 00:18:17.538 }, 00:18:17.538 { 00:18:17.538 "name": "pt2", 00:18:17.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.538 "is_configured": true, 00:18:17.538 "data_offset": 2048, 00:18:17.538 "data_size": 63488 00:18:17.538 }, 00:18:17.538 { 00:18:17.538 "name": "pt3", 00:18:17.538 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:17.538 "is_configured": true, 00:18:17.538 "data_offset": 2048, 00:18:17.538 "data_size": 63488 00:18:17.538 }, 00:18:17.538 { 00:18:17.538 "name": "pt4", 00:18:17.538 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:17.538 "is_configured": true, 00:18:17.538 "data_offset": 2048, 00:18:17.538 "data_size": 63488 00:18:17.538 } 00:18:17.538 ] 00:18:17.538 }' 00:18:17.538 18:21:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.538 18:21:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.857 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:17.857 18:21:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:17.857 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:17.857 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:17.857 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:17.857 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:17.857 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:17.857 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:18.117 [2024-07-24 18:21:26.545622] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.117 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:18.117 "name": "raid_bdev1", 00:18:18.117 "aliases": [ 00:18:18.117 "b0549f93-d576-4e06-af56-c07111667e01" 00:18:18.117 ], 00:18:18.117 "product_name": "Raid Volume", 00:18:18.117 "block_size": 512, 00:18:18.117 "num_blocks": 63488, 00:18:18.117 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:18.117 "assigned_rate_limits": { 00:18:18.117 "rw_ios_per_sec": 0, 00:18:18.117 "rw_mbytes_per_sec": 0, 00:18:18.117 "r_mbytes_per_sec": 0, 00:18:18.117 "w_mbytes_per_sec": 0 00:18:18.117 }, 00:18:18.117 "claimed": false, 00:18:18.117 "zoned": false, 00:18:18.117 "supported_io_types": { 00:18:18.117 "read": true, 00:18:18.117 "write": true, 00:18:18.117 "unmap": false, 00:18:18.117 "flush": false, 00:18:18.117 "reset": true, 00:18:18.117 "nvme_admin": false, 00:18:18.117 "nvme_io": false, 00:18:18.117 "nvme_io_md": false, 00:18:18.117 "write_zeroes": true, 00:18:18.117 "zcopy": false, 00:18:18.117 "get_zone_info": false, 00:18:18.117 "zone_management": false, 
00:18:18.117 "zone_append": false, 00:18:18.117 "compare": false, 00:18:18.117 "compare_and_write": false, 00:18:18.117 "abort": false, 00:18:18.117 "seek_hole": false, 00:18:18.117 "seek_data": false, 00:18:18.117 "copy": false, 00:18:18.117 "nvme_iov_md": false 00:18:18.117 }, 00:18:18.117 "memory_domains": [ 00:18:18.117 { 00:18:18.117 "dma_device_id": "system", 00:18:18.117 "dma_device_type": 1 00:18:18.117 }, 00:18:18.117 { 00:18:18.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.117 "dma_device_type": 2 00:18:18.117 }, 00:18:18.117 { 00:18:18.117 "dma_device_id": "system", 00:18:18.117 "dma_device_type": 1 00:18:18.117 }, 00:18:18.117 { 00:18:18.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.117 "dma_device_type": 2 00:18:18.117 }, 00:18:18.117 { 00:18:18.117 "dma_device_id": "system", 00:18:18.117 "dma_device_type": 1 00:18:18.117 }, 00:18:18.117 { 00:18:18.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.117 "dma_device_type": 2 00:18:18.117 }, 00:18:18.118 { 00:18:18.118 "dma_device_id": "system", 00:18:18.118 "dma_device_type": 1 00:18:18.118 }, 00:18:18.118 { 00:18:18.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.118 "dma_device_type": 2 00:18:18.118 } 00:18:18.118 ], 00:18:18.118 "driver_specific": { 00:18:18.118 "raid": { 00:18:18.118 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:18.118 "strip_size_kb": 0, 00:18:18.118 "state": "online", 00:18:18.118 "raid_level": "raid1", 00:18:18.118 "superblock": true, 00:18:18.118 "num_base_bdevs": 4, 00:18:18.118 "num_base_bdevs_discovered": 4, 00:18:18.118 "num_base_bdevs_operational": 4, 00:18:18.118 "base_bdevs_list": [ 00:18:18.118 { 00:18:18.118 "name": "pt1", 00:18:18.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.118 "is_configured": true, 00:18:18.118 "data_offset": 2048, 00:18:18.118 "data_size": 63488 00:18:18.118 }, 00:18:18.118 { 00:18:18.118 "name": "pt2", 00:18:18.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.118 "is_configured": 
true, 00:18:18.118 "data_offset": 2048, 00:18:18.118 "data_size": 63488 00:18:18.118 }, 00:18:18.118 { 00:18:18.118 "name": "pt3", 00:18:18.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:18.118 "is_configured": true, 00:18:18.118 "data_offset": 2048, 00:18:18.118 "data_size": 63488 00:18:18.118 }, 00:18:18.118 { 00:18:18.118 "name": "pt4", 00:18:18.118 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:18.118 "is_configured": true, 00:18:18.118 "data_offset": 2048, 00:18:18.118 "data_size": 63488 00:18:18.118 } 00:18:18.118 ] 00:18:18.118 } 00:18:18.118 } 00:18:18.118 }' 00:18:18.118 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:18.118 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:18.118 pt2 00:18:18.118 pt3 00:18:18.118 pt4' 00:18:18.118 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:18.118 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:18.118 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:18.377 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:18.377 "name": "pt1", 00:18:18.377 "aliases": [ 00:18:18.377 "00000000-0000-0000-0000-000000000001" 00:18:18.377 ], 00:18:18.377 "product_name": "passthru", 00:18:18.377 "block_size": 512, 00:18:18.377 "num_blocks": 65536, 00:18:18.377 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.377 "assigned_rate_limits": { 00:18:18.377 "rw_ios_per_sec": 0, 00:18:18.377 "rw_mbytes_per_sec": 0, 00:18:18.377 "r_mbytes_per_sec": 0, 00:18:18.377 "w_mbytes_per_sec": 0 00:18:18.377 }, 00:18:18.377 "claimed": true, 00:18:18.377 "claim_type": "exclusive_write", 00:18:18.377 
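The `base_bdev_names` list extracted above comes from `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`. A hedged Python equivalent follows (the sample data is reduced to just the fields this filter touches; in this log all four legs are configured, so all four names come back):

```python
import json

# Minimal stand-in for the raid_bdev_info dump above: only the fields
# the jq filter reads are kept.
raid_bdev_info = json.loads("""
{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true},
        {"name": "pt3", "is_configured": true},
        {"name": "pt4", "is_configured": true}
      ]
    }
  }
}
""")

def configured_base_bdevs(info):
    # jq: .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
    return [b["name"]
            for b in info["driver_specific"]["raid"]["base_bdevs_list"]
            if b["is_configured"]]

base_bdev_names = configured_base_bdevs(raid_bdev_info)
```

The per-name loop that follows in the log (`for name in $base_bdev_names`) then fetches each passthru bdev with `bdev_get_bdevs -b <name>` and checks `block_size`, `md_size`, `md_interleave`, and `dif_type` individually.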
"zoned": false, 00:18:18.377 "supported_io_types": { 00:18:18.377 "read": true, 00:18:18.377 "write": true, 00:18:18.377 "unmap": true, 00:18:18.377 "flush": true, 00:18:18.377 "reset": true, 00:18:18.377 "nvme_admin": false, 00:18:18.377 "nvme_io": false, 00:18:18.377 "nvme_io_md": false, 00:18:18.377 "write_zeroes": true, 00:18:18.377 "zcopy": true, 00:18:18.377 "get_zone_info": false, 00:18:18.377 "zone_management": false, 00:18:18.377 "zone_append": false, 00:18:18.377 "compare": false, 00:18:18.378 "compare_and_write": false, 00:18:18.378 "abort": true, 00:18:18.378 "seek_hole": false, 00:18:18.378 "seek_data": false, 00:18:18.378 "copy": true, 00:18:18.378 "nvme_iov_md": false 00:18:18.378 }, 00:18:18.378 "memory_domains": [ 00:18:18.378 { 00:18:18.378 "dma_device_id": "system", 00:18:18.378 "dma_device_type": 1 00:18:18.378 }, 00:18:18.378 { 00:18:18.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.378 "dma_device_type": 2 00:18:18.378 } 00:18:18.378 ], 00:18:18.378 "driver_specific": { 00:18:18.378 "passthru": { 00:18:18.378 "name": "pt1", 00:18:18.378 "base_bdev_name": "malloc1" 00:18:18.378 } 00:18:18.378 } 00:18:18.378 }' 00:18:18.378 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.378 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.378 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:18.378 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.378 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.378 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:18.378 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.637 18:21:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.637 18:21:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:18.637 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.637 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:18.637 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:18.637 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:18.637 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:18.637 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:18.896 "name": "pt2", 00:18:18.896 "aliases": [ 00:18:18.896 "00000000-0000-0000-0000-000000000002" 00:18:18.896 ], 00:18:18.896 "product_name": "passthru", 00:18:18.896 "block_size": 512, 00:18:18.896 "num_blocks": 65536, 00:18:18.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.896 "assigned_rate_limits": { 00:18:18.896 "rw_ios_per_sec": 0, 00:18:18.896 "rw_mbytes_per_sec": 0, 00:18:18.896 "r_mbytes_per_sec": 0, 00:18:18.896 "w_mbytes_per_sec": 0 00:18:18.896 }, 00:18:18.896 "claimed": true, 00:18:18.896 "claim_type": "exclusive_write", 00:18:18.896 "zoned": false, 00:18:18.896 "supported_io_types": { 00:18:18.896 "read": true, 00:18:18.896 "write": true, 00:18:18.896 "unmap": true, 00:18:18.896 "flush": true, 00:18:18.896 "reset": true, 00:18:18.896 "nvme_admin": false, 00:18:18.896 "nvme_io": false, 00:18:18.896 "nvme_io_md": false, 00:18:18.896 "write_zeroes": true, 00:18:18.896 "zcopy": true, 00:18:18.896 "get_zone_info": false, 00:18:18.896 "zone_management": false, 00:18:18.896 "zone_append": false, 00:18:18.896 "compare": false, 00:18:18.896 "compare_and_write": false, 00:18:18.896 "abort": true, 00:18:18.896 
"seek_hole": false, 00:18:18.896 "seek_data": false, 00:18:18.896 "copy": true, 00:18:18.896 "nvme_iov_md": false 00:18:18.896 }, 00:18:18.896 "memory_domains": [ 00:18:18.896 { 00:18:18.896 "dma_device_id": "system", 00:18:18.896 "dma_device_type": 1 00:18:18.896 }, 00:18:18.896 { 00:18:18.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.896 "dma_device_type": 2 00:18:18.896 } 00:18:18.896 ], 00:18:18.896 "driver_specific": { 00:18:18.896 "passthru": { 00:18:18.896 "name": "pt2", 00:18:18.896 "base_bdev_name": "malloc2" 00:18:18.896 } 00:18:18.896 } 00:18:18.896 }' 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:18.896 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.155 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.155 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.155 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.155 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.155 18:21:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:19.155 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.155 "name": "pt3", 00:18:19.155 "aliases": [ 00:18:19.155 "00000000-0000-0000-0000-000000000003" 00:18:19.155 ], 00:18:19.155 "product_name": "passthru", 00:18:19.155 "block_size": 512, 00:18:19.155 "num_blocks": 65536, 00:18:19.155 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:19.155 "assigned_rate_limits": { 00:18:19.155 "rw_ios_per_sec": 0, 00:18:19.155 "rw_mbytes_per_sec": 0, 00:18:19.155 "r_mbytes_per_sec": 0, 00:18:19.155 "w_mbytes_per_sec": 0 00:18:19.155 }, 00:18:19.155 "claimed": true, 00:18:19.155 "claim_type": "exclusive_write", 00:18:19.155 "zoned": false, 00:18:19.155 "supported_io_types": { 00:18:19.155 "read": true, 00:18:19.155 "write": true, 00:18:19.155 "unmap": true, 00:18:19.155 "flush": true, 00:18:19.155 "reset": true, 00:18:19.155 "nvme_admin": false, 00:18:19.155 "nvme_io": false, 00:18:19.155 "nvme_io_md": false, 00:18:19.155 "write_zeroes": true, 00:18:19.155 "zcopy": true, 00:18:19.155 "get_zone_info": false, 00:18:19.155 "zone_management": false, 00:18:19.155 "zone_append": false, 00:18:19.155 "compare": false, 00:18:19.155 "compare_and_write": false, 00:18:19.155 "abort": true, 00:18:19.155 "seek_hole": false, 00:18:19.155 "seek_data": false, 00:18:19.155 "copy": true, 00:18:19.155 "nvme_iov_md": false 00:18:19.155 }, 00:18:19.155 "memory_domains": [ 00:18:19.155 { 00:18:19.155 "dma_device_id": "system", 00:18:19.155 "dma_device_type": 1 00:18:19.155 }, 00:18:19.155 { 00:18:19.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.155 "dma_device_type": 2 00:18:19.155 } 00:18:19.155 ], 00:18:19.155 "driver_specific": { 00:18:19.155 "passthru": { 00:18:19.155 "name": "pt3", 00:18:19.155 "base_bdev_name": "malloc3" 00:18:19.155 } 00:18:19.155 } 
00:18:19.155 }' 00:18:19.155 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.414 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.414 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.414 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.414 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.414 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:19.414 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.414 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.414 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.414 18:21:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.672 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.672 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.672 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.672 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:19.672 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.672 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.672 "name": "pt4", 00:18:19.672 "aliases": [ 00:18:19.672 "00000000-0000-0000-0000-000000000004" 00:18:19.672 ], 00:18:19.672 "product_name": "passthru", 00:18:19.672 "block_size": 512, 00:18:19.672 "num_blocks": 65536, 00:18:19.672 "uuid": "00000000-0000-0000-0000-000000000004", 
00:18:19.672 "assigned_rate_limits": { 00:18:19.672 "rw_ios_per_sec": 0, 00:18:19.672 "rw_mbytes_per_sec": 0, 00:18:19.672 "r_mbytes_per_sec": 0, 00:18:19.672 "w_mbytes_per_sec": 0 00:18:19.672 }, 00:18:19.672 "claimed": true, 00:18:19.672 "claim_type": "exclusive_write", 00:18:19.672 "zoned": false, 00:18:19.672 "supported_io_types": { 00:18:19.672 "read": true, 00:18:19.672 "write": true, 00:18:19.673 "unmap": true, 00:18:19.673 "flush": true, 00:18:19.673 "reset": true, 00:18:19.673 "nvme_admin": false, 00:18:19.673 "nvme_io": false, 00:18:19.673 "nvme_io_md": false, 00:18:19.673 "write_zeroes": true, 00:18:19.673 "zcopy": true, 00:18:19.673 "get_zone_info": false, 00:18:19.673 "zone_management": false, 00:18:19.673 "zone_append": false, 00:18:19.673 "compare": false, 00:18:19.673 "compare_and_write": false, 00:18:19.673 "abort": true, 00:18:19.673 "seek_hole": false, 00:18:19.673 "seek_data": false, 00:18:19.673 "copy": true, 00:18:19.673 "nvme_iov_md": false 00:18:19.673 }, 00:18:19.673 "memory_domains": [ 00:18:19.673 { 00:18:19.673 "dma_device_id": "system", 00:18:19.673 "dma_device_type": 1 00:18:19.673 }, 00:18:19.673 { 00:18:19.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.673 "dma_device_type": 2 00:18:19.673 } 00:18:19.673 ], 00:18:19.673 "driver_specific": { 00:18:19.673 "passthru": { 00:18:19.673 "name": "pt4", 00:18:19.673 "base_bdev_name": "malloc4" 00:18:19.673 } 00:18:19.673 } 00:18:19.673 }' 00:18:19.673 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.931 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.931 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.931 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.931 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.931 18:21:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:19.931 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.931 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.931 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.931 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.931 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.190 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.190 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:20.190 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:20.190 [2024-07-24 18:21:28.691155] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.190 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' b0549f93-d576-4e06-af56-c07111667e01 '!=' b0549f93-d576-4e06-af56-c07111667e01 ']' 00:18:20.190 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:20.190 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:20.190 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:20.190 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:20.449 [2024-07-24 18:21:28.867446] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:20.449 18:21:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.449 18:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.764 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.764 "name": "raid_bdev1", 00:18:20.764 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:20.764 "strip_size_kb": 0, 00:18:20.764 "state": "online", 00:18:20.764 "raid_level": "raid1", 00:18:20.764 "superblock": true, 00:18:20.764 "num_base_bdevs": 4, 00:18:20.764 "num_base_bdevs_discovered": 3, 00:18:20.764 "num_base_bdevs_operational": 3, 00:18:20.764 "base_bdevs_list": [ 00:18:20.764 { 00:18:20.764 "name": null, 00:18:20.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.764 "is_configured": false, 00:18:20.764 "data_offset": 2048, 00:18:20.764 "data_size": 63488 00:18:20.764 }, 00:18:20.764 { 
00:18:20.764 "name": "pt2", 00:18:20.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.764 "is_configured": true, 00:18:20.764 "data_offset": 2048, 00:18:20.764 "data_size": 63488 00:18:20.764 }, 00:18:20.764 { 00:18:20.764 "name": "pt3", 00:18:20.764 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:20.764 "is_configured": true, 00:18:20.764 "data_offset": 2048, 00:18:20.764 "data_size": 63488 00:18:20.764 }, 00:18:20.764 { 00:18:20.764 "name": "pt4", 00:18:20.764 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:20.764 "is_configured": true, 00:18:20.764 "data_offset": 2048, 00:18:20.764 "data_size": 63488 00:18:20.764 } 00:18:20.764 ] 00:18:20.764 }' 00:18:20.764 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.764 18:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.023 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:21.282 [2024-07-24 18:21:29.673512] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.282 [2024-07-24 18:21:29.673530] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.282 [2024-07-24 18:21:29.673565] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.282 [2024-07-24 18:21:29.673607] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.282 [2024-07-24 18:21:29.673614] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1735ec0 name raid_bdev1, state offline 00:18:21.282 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.282 18:21:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:21.282 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:21.282 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:21.282 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:21.282 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:21.282 18:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:21.542 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:21.542 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:21.542 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:21.802 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:21.802 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:21.802 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:21.802 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:21.802 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:21.802 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:21.802 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:21.802 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.062 [2024-07-24 18:21:30.499612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.062 [2024-07-24 18:21:30.499647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.062 [2024-07-24 18:21:30.499659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18d8e40 00:18:22.062 [2024-07-24 18:21:30.499667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.062 [2024-07-24 18:21:30.500808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.062 [2024-07-24 18:21:30.500831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.062 [2024-07-24 18:21:30.500877] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:22.062 [2024-07-24 18:21:30.500895] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.062 pt2 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.062 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.321 18:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.321 "name": "raid_bdev1", 00:18:22.321 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:22.321 "strip_size_kb": 0, 00:18:22.321 "state": "configuring", 00:18:22.321 "raid_level": "raid1", 00:18:22.321 "superblock": true, 00:18:22.321 "num_base_bdevs": 4, 00:18:22.321 "num_base_bdevs_discovered": 1, 00:18:22.321 "num_base_bdevs_operational": 3, 00:18:22.321 "base_bdevs_list": [ 00:18:22.321 { 00:18:22.321 "name": null, 00:18:22.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.321 "is_configured": false, 00:18:22.321 "data_offset": 2048, 00:18:22.321 "data_size": 63488 00:18:22.321 }, 00:18:22.321 { 00:18:22.321 "name": "pt2", 00:18:22.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.321 "is_configured": true, 00:18:22.321 "data_offset": 2048, 00:18:22.321 "data_size": 63488 00:18:22.321 }, 00:18:22.321 { 00:18:22.321 "name": null, 00:18:22.321 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:22.321 "is_configured": false, 00:18:22.321 "data_offset": 2048, 00:18:22.321 "data_size": 63488 00:18:22.321 }, 00:18:22.322 { 00:18:22.322 "name": null, 00:18:22.322 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:22.322 "is_configured": false, 00:18:22.322 "data_offset": 2048, 00:18:22.322 "data_size": 63488 00:18:22.322 } 00:18:22.322 ] 00:18:22.322 }' 00:18:22.322 18:21:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.322 18:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.580 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:18:22.580 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:22.580 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:22.840 [2024-07-24 18:21:31.301688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:22.840 [2024-07-24 18:21:31.301728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.840 [2024-07-24 18:21:31.301743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x173f150 00:18:22.840 [2024-07-24 18:21:31.301752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.840 [2024-07-24 18:21:31.301997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.840 [2024-07-24 18:21:31.302010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:22.840 [2024-07-24 18:21:31.302055] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:22.840 [2024-07-24 18:21:31.302067] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:22.840 pt3 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:22.840 18:21:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.840 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.099 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.099 "name": "raid_bdev1", 00:18:23.099 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:23.099 "strip_size_kb": 0, 00:18:23.099 "state": "configuring", 00:18:23.099 "raid_level": "raid1", 00:18:23.099 "superblock": true, 00:18:23.099 "num_base_bdevs": 4, 00:18:23.099 "num_base_bdevs_discovered": 2, 00:18:23.099 "num_base_bdevs_operational": 3, 00:18:23.099 "base_bdevs_list": [ 00:18:23.099 { 00:18:23.099 "name": null, 00:18:23.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.099 "is_configured": false, 00:18:23.099 "data_offset": 2048, 00:18:23.099 "data_size": 63488 00:18:23.099 }, 00:18:23.099 { 00:18:23.099 "name": "pt2", 00:18:23.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.099 "is_configured": true, 00:18:23.099 "data_offset": 2048, 00:18:23.099 "data_size": 63488 00:18:23.099 }, 00:18:23.099 { 
00:18:23.099 "name": "pt3", 00:18:23.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:23.099 "is_configured": true, 00:18:23.099 "data_offset": 2048, 00:18:23.099 "data_size": 63488 00:18:23.099 }, 00:18:23.099 { 00:18:23.099 "name": null, 00:18:23.099 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:23.099 "is_configured": false, 00:18:23.099 "data_offset": 2048, 00:18:23.099 "data_size": 63488 00:18:23.099 } 00:18:23.099 ] 00:18:23.099 }' 00:18:23.099 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.099 18:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.668 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:18:23.668 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:23.668 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:18:23.668 18:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:23.668 [2024-07-24 18:21:32.111776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:23.668 [2024-07-24 18:21:32.111819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.668 [2024-07-24 18:21:32.111833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18e1c60 00:18:23.668 [2024-07-24 18:21:32.111841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.668 [2024-07-24 18:21:32.112085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.668 [2024-07-24 18:21:32.112098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:23.668 [2024-07-24 18:21:32.112146] bdev_raid.c:3844:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt4 00:18:23.668 [2024-07-24 18:21:32.112160] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:23.668 [2024-07-24 18:21:32.112234] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x17365e0 00:18:23.668 [2024-07-24 18:21:32.112241] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:23.668 [2024-07-24 18:21:32.112350] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x173ad40 00:18:23.668 [2024-07-24 18:21:32.112434] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17365e0 00:18:23.668 [2024-07-24 18:21:32.112441] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17365e0 00:18:23.668 [2024-07-24 18:21:32.112502] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.668 pt4 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.668 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.927 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.927 "name": "raid_bdev1", 00:18:23.927 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:23.927 "strip_size_kb": 0, 00:18:23.927 "state": "online", 00:18:23.927 "raid_level": "raid1", 00:18:23.927 "superblock": true, 00:18:23.927 "num_base_bdevs": 4, 00:18:23.927 "num_base_bdevs_discovered": 3, 00:18:23.927 "num_base_bdevs_operational": 3, 00:18:23.927 "base_bdevs_list": [ 00:18:23.927 { 00:18:23.927 "name": null, 00:18:23.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.927 "is_configured": false, 00:18:23.927 "data_offset": 2048, 00:18:23.927 "data_size": 63488 00:18:23.927 }, 00:18:23.927 { 00:18:23.927 "name": "pt2", 00:18:23.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.927 "is_configured": true, 00:18:23.927 "data_offset": 2048, 00:18:23.927 "data_size": 63488 00:18:23.927 }, 00:18:23.927 { 00:18:23.927 "name": "pt3", 00:18:23.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:23.927 "is_configured": true, 00:18:23.927 "data_offset": 2048, 00:18:23.927 "data_size": 63488 00:18:23.927 }, 00:18:23.927 { 00:18:23.927 "name": "pt4", 00:18:23.927 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:23.927 "is_configured": true, 00:18:23.927 "data_offset": 2048, 00:18:23.927 "data_size": 63488 00:18:23.927 } 00:18:23.927 ] 00:18:23.927 }' 00:18:23.927 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.927 18:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.186 18:21:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:24.445 [2024-07-24 18:21:32.929886] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.445 [2024-07-24 18:21:32.929905] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.445 [2024-07-24 18:21:32.929948] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.445 [2024-07-24 18:21:32.930000] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.445 [2024-07-24 18:21:32.930008] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17365e0 name raid_bdev1, state offline 00:18:24.445 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.445 18:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:24.704 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:24.704 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:24.704 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:18:24.704 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:18:24.704 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:24.704 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:24.962 [2024-07-24 
18:21:33.423140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:24.962 [2024-07-24 18:21:33.423175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.962 [2024-07-24 18:21:33.423186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18e1c60 00:18:24.962 [2024-07-24 18:21:33.423194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.962 [2024-07-24 18:21:33.424345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.962 [2024-07-24 18:21:33.424369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:24.962 [2024-07-24 18:21:33.424419] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:24.962 [2024-07-24 18:21:33.424437] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:24.962 [2024-07-24 18:21:33.424508] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:24.962 [2024-07-24 18:21:33.424516] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.962 [2024-07-24 18:21:33.424525] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17357a0 name raid_bdev1, state configuring 00:18:24.962 [2024-07-24 18:21:33.424540] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:24.962 [2024-07-24 18:21:33.424588] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:24.962 pt1 00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.962 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.963 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.963 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.221 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.221 "name": "raid_bdev1", 00:18:25.221 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:25.221 "strip_size_kb": 0, 00:18:25.221 "state": "configuring", 00:18:25.221 "raid_level": "raid1", 00:18:25.221 "superblock": true, 00:18:25.221 "num_base_bdevs": 4, 00:18:25.221 "num_base_bdevs_discovered": 2, 00:18:25.221 "num_base_bdevs_operational": 3, 00:18:25.221 "base_bdevs_list": [ 00:18:25.221 { 00:18:25.221 "name": null, 00:18:25.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.221 "is_configured": false, 00:18:25.221 "data_offset": 2048, 00:18:25.221 "data_size": 63488 00:18:25.221 }, 00:18:25.221 { 00:18:25.221 "name": "pt2", 00:18:25.221 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:25.221 "is_configured": true, 00:18:25.221 "data_offset": 2048, 00:18:25.221 "data_size": 63488 00:18:25.221 }, 00:18:25.221 { 00:18:25.221 "name": "pt3", 00:18:25.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:25.221 "is_configured": true, 00:18:25.221 "data_offset": 2048, 00:18:25.221 "data_size": 63488 00:18:25.221 }, 00:18:25.221 { 00:18:25.221 "name": null, 00:18:25.221 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:25.221 "is_configured": false, 00:18:25.221 "data_offset": 2048, 00:18:25.221 "data_size": 63488 00:18:25.221 } 00:18:25.221 ] 00:18:25.221 }' 00:18:25.221 18:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.221 18:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.480 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:18:25.480 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:25.740 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:18:25.740 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:26.000 [2024-07-24 18:21:34.373597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:26.000 [2024-07-24 18:21:34.373642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.000 [2024-07-24 18:21:34.373656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1735a50 00:18:26.000 [2024-07-24 18:21:34.373664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:26.000 [2024-07-24 18:21:34.373911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.000 [2024-07-24 18:21:34.373923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:26.000 [2024-07-24 18:21:34.373968] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:26.000 [2024-07-24 18:21:34.373980] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:26.000 [2024-07-24 18:21:34.374054] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1739280 00:18:26.000 [2024-07-24 18:21:34.374061] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:26.000 [2024-07-24 18:21:34.374173] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18d90d0 00:18:26.000 [2024-07-24 18:21:34.374258] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1739280 00:18:26.000 [2024-07-24 18:21:34.374263] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1739280 00:18:26.000 [2024-07-24 18:21:34.374339] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.000 pt4 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:26.000 "name": "raid_bdev1", 00:18:26.000 "uuid": "b0549f93-d576-4e06-af56-c07111667e01", 00:18:26.000 "strip_size_kb": 0, 00:18:26.000 "state": "online", 00:18:26.000 "raid_level": "raid1", 00:18:26.000 "superblock": true, 00:18:26.000 "num_base_bdevs": 4, 00:18:26.000 "num_base_bdevs_discovered": 3, 00:18:26.000 "num_base_bdevs_operational": 3, 00:18:26.000 "base_bdevs_list": [ 00:18:26.000 { 00:18:26.000 "name": null, 00:18:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.000 "is_configured": false, 00:18:26.000 "data_offset": 2048, 00:18:26.000 "data_size": 63488 00:18:26.000 }, 00:18:26.000 { 00:18:26.000 "name": "pt2", 00:18:26.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.000 "is_configured": true, 00:18:26.000 "data_offset": 2048, 00:18:26.000 "data_size": 63488 00:18:26.000 }, 00:18:26.000 { 00:18:26.000 "name": "pt3", 00:18:26.000 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:26.000 "is_configured": true, 00:18:26.000 "data_offset": 2048, 00:18:26.000 "data_size": 63488 00:18:26.000 }, 00:18:26.000 { 00:18:26.000 "name": "pt4", 00:18:26.000 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:26.000 "is_configured": true, 00:18:26.000 
"data_offset": 2048, 00:18:26.000 "data_size": 63488 00:18:26.000 } 00:18:26.000 ] 00:18:26.000 }' 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:26.000 18:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.566 18:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:26.566 18:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:26.825 18:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:26.825 18:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:26.825 18:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:26.825 [2024-07-24 18:21:35.364324] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.825 18:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' b0549f93-d576-4e06-af56-c07111667e01 '!=' b0549f93-d576-4e06-af56-c07111667e01 ']' 00:18:26.825 18:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 2251214 00:18:26.825 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 2251214 ']' 00:18:26.825 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 2251214 00:18:26.825 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:26.825 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:26.825 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2251214 00:18:27.085 
18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:27.085 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:27.085 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2251214' 00:18:27.085 killing process with pid 2251214 00:18:27.085 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 2251214 00:18:27.085 [2024-07-24 18:21:35.422799] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.085 [2024-07-24 18:21:35.422843] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.085 [2024-07-24 18:21:35.422894] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.085 [2024-07-24 18:21:35.422902] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1739280 name raid_bdev1, state offline 00:18:27.085 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 2251214 00:18:27.085 [2024-07-24 18:21:35.452301] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.085 18:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:27.085 00:18:27.085 real 0m19.113s 00:18:27.085 user 0m34.811s 00:18:27.085 sys 0m3.654s 00:18:27.085 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:27.085 18:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.085 ************************************ 00:18:27.085 END TEST raid_superblock_test 00:18:27.085 ************************************ 00:18:27.085 18:21:35 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:18:27.085 18:21:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:27.085 18:21:35 
bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:27.085 18:21:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.344 ************************************ 00:18:27.344 START TEST raid_read_error_test 00:18:27.345 ************************************ 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.XxWnk15w4c 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2255008 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2255008 /var/tmp/spdk-raid.sock 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 2255008 ']' 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:27.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.345 18:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.345 [2024-07-24 18:21:35.775420] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:18:27.345 [2024-07-24 18:21:35.775463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255008 ] 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:01.0 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:01.1 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:01.2 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:01.3 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:01.4 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:01.5 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:01.6 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:01.7 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:02.0 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:02.1 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:02.2 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:02.3 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:02.4 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:02.5 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:02.6 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b3:02.7 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:01.0 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:01.1 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:01.2 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:01.3 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:18:27.345 EAL: Requested device 0000:b5:01.4 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:01.5 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:01.6 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:01.7 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:02.0 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:02.1 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:02.2 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:02.3 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:02.4 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:02.5 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:02.6 cannot be used 00:18:27.345 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:27.345 EAL: Requested device 0000:b5:02.7 cannot be used 00:18:27.345 [2024-07-24 18:21:35.867335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.605 [2024-07-24 18:21:35.941042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.605 [2024-07-24 18:21:36.000178] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.605 [2024-07-24 18:21:36.000207] 
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.173 18:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.173 18:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:28.174 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:28.174 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:28.174 BaseBdev1_malloc 00:18:28.174 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:28.433 true 00:18:28.433 18:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:28.693 [2024-07-24 18:21:37.032433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:28.693 [2024-07-24 18:21:37.032463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.693 [2024-07-24 18:21:37.032476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1530ed0 00:18:28.693 [2024-07-24 18:21:37.032484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.693 [2024-07-24 18:21:37.033597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.693 [2024-07-24 18:21:37.033621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:28.693 BaseBdev1 00:18:28.693 18:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:28.693 18:21:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:28.693 BaseBdev2_malloc 00:18:28.693 18:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:28.952 true 00:18:28.952 18:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:29.211 [2024-07-24 18:21:37.549319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:29.211 [2024-07-24 18:21:37.549353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.211 [2024-07-24 18:21:37.549366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1535b60 00:18:29.211 [2024-07-24 18:21:37.549374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.211 [2024-07-24 18:21:37.550413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.211 [2024-07-24 18:21:37.550436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:29.211 BaseBdev2 00:18:29.211 18:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:29.211 18:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:29.211 BaseBdev3_malloc 00:18:29.211 18:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create 
BaseBdev3_malloc 00:18:29.471 true 00:18:29.471 18:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:29.471 [2024-07-24 18:21:38.054222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:29.471 [2024-07-24 18:21:38.054255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.471 [2024-07-24 18:21:38.054271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1536ad0 00:18:29.471 [2024-07-24 18:21:38.054279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.471 [2024-07-24 18:21:38.055261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.471 [2024-07-24 18:21:38.055283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:29.471 BaseBdev3 00:18:29.730 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:29.730 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:29.730 BaseBdev4_malloc 00:18:29.730 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:29.990 true 00:18:29.990 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:29.990 [2024-07-24 18:21:38.555032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:29.990 [2024-07-24 
18:21:38.555064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.990 [2024-07-24 18:21:38.555078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1538d40 00:18:29.990 [2024-07-24 18:21:38.555086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.990 [2024-07-24 18:21:38.556099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.990 [2024-07-24 18:21:38.556121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:29.990 BaseBdev4 00:18:29.990 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:30.250 [2024-07-24 18:21:38.711461] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.250 [2024-07-24 18:21:38.712248] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:30.250 [2024-07-24 18:21:38.712294] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:30.250 [2024-07-24 18:21:38.712330] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:30.250 [2024-07-24 18:21:38.712482] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1539b10 00:18:30.250 [2024-07-24 18:21:38.712489] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:30.250 [2024-07-24 18:21:38.712610] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x153adc0 00:18:30.250 [2024-07-24 18:21:38.712715] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1539b10 00:18:30.250 [2024-07-24 18:21:38.712721] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x1539b10 00:18:30.250 [2024-07-24 18:21:38.712784] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.250 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.510 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:30.510 "name": "raid_bdev1", 00:18:30.510 "uuid": "5cdf3c5a-2525-4813-b567-9fc8d738f449", 00:18:30.510 "strip_size_kb": 0, 00:18:30.510 "state": "online", 00:18:30.510 "raid_level": "raid1", 00:18:30.510 "superblock": true, 00:18:30.510 "num_base_bdevs": 4, 00:18:30.510 "num_base_bdevs_discovered": 4, 00:18:30.510 
"num_base_bdevs_operational": 4, 00:18:30.510 "base_bdevs_list": [ 00:18:30.510 { 00:18:30.510 "name": "BaseBdev1", 00:18:30.510 "uuid": "06de0a71-bdda-5f74-a886-7e394ca6f109", 00:18:30.510 "is_configured": true, 00:18:30.510 "data_offset": 2048, 00:18:30.510 "data_size": 63488 00:18:30.510 }, 00:18:30.510 { 00:18:30.510 "name": "BaseBdev2", 00:18:30.510 "uuid": "802b89ee-dd81-5f3e-a2e9-196c4f705e8e", 00:18:30.510 "is_configured": true, 00:18:30.510 "data_offset": 2048, 00:18:30.510 "data_size": 63488 00:18:30.510 }, 00:18:30.510 { 00:18:30.510 "name": "BaseBdev3", 00:18:30.510 "uuid": "fd979c26-061b-5fcd-940c-b0c8ffa32acd", 00:18:30.510 "is_configured": true, 00:18:30.510 "data_offset": 2048, 00:18:30.510 "data_size": 63488 00:18:30.510 }, 00:18:30.510 { 00:18:30.510 "name": "BaseBdev4", 00:18:30.510 "uuid": "c7b071c3-6085-570e-b788-75e721a325e7", 00:18:30.510 "is_configured": true, 00:18:30.510 "data_offset": 2048, 00:18:30.510 "data_size": 63488 00:18:30.510 } 00:18:30.510 ] 00:18:30.510 }' 00:18:30.510 18:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:30.510 18:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.769 18:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:30.769 18:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:31.028 [2024-07-24 18:21:39.417474] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x153adc0 00:18:31.965 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:31.965 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:31.965 18:21:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:31.965 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:18:31.965 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:18:31.965 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:31.965 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:31.965 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:31.966 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:31.966 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:31.966 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:31.966 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.966 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.966 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.966 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.966 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.966 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.224 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:32.224 "name": "raid_bdev1", 00:18:32.224 "uuid": "5cdf3c5a-2525-4813-b567-9fc8d738f449", 00:18:32.224 "strip_size_kb": 0, 00:18:32.224 "state": "online", 
00:18:32.224 "raid_level": "raid1", 00:18:32.224 "superblock": true, 00:18:32.224 "num_base_bdevs": 4, 00:18:32.224 "num_base_bdevs_discovered": 4, 00:18:32.224 "num_base_bdevs_operational": 4, 00:18:32.224 "base_bdevs_list": [ 00:18:32.224 { 00:18:32.224 "name": "BaseBdev1", 00:18:32.224 "uuid": "06de0a71-bdda-5f74-a886-7e394ca6f109", 00:18:32.224 "is_configured": true, 00:18:32.224 "data_offset": 2048, 00:18:32.224 "data_size": 63488 00:18:32.224 }, 00:18:32.224 { 00:18:32.224 "name": "BaseBdev2", 00:18:32.224 "uuid": "802b89ee-dd81-5f3e-a2e9-196c4f705e8e", 00:18:32.224 "is_configured": true, 00:18:32.224 "data_offset": 2048, 00:18:32.224 "data_size": 63488 00:18:32.224 }, 00:18:32.224 { 00:18:32.224 "name": "BaseBdev3", 00:18:32.224 "uuid": "fd979c26-061b-5fcd-940c-b0c8ffa32acd", 00:18:32.224 "is_configured": true, 00:18:32.224 "data_offset": 2048, 00:18:32.224 "data_size": 63488 00:18:32.224 }, 00:18:32.224 { 00:18:32.224 "name": "BaseBdev4", 00:18:32.224 "uuid": "c7b071c3-6085-570e-b788-75e721a325e7", 00:18:32.224 "is_configured": true, 00:18:32.224 "data_offset": 2048, 00:18:32.224 "data_size": 63488 00:18:32.224 } 00:18:32.224 ] 00:18:32.224 }' 00:18:32.224 18:21:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:32.224 18:21:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:32.792 [2024-07-24 18:21:41.265348] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:32.792 [2024-07-24 18:21:41.265378] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.792 [2024-07-24 18:21:41.267413] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.792 [2024-07-24 18:21:41.267442] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.792 [2024-07-24 18:21:41.267515] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.792 [2024-07-24 18:21:41.267523] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1539b10 name raid_bdev1, state offline 00:18:32.792 0 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2255008 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 2255008 ']' 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 2255008 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2255008 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2255008' 00:18:32.792 killing process with pid 2255008 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 2255008 00:18:32.792 [2024-07-24 18:21:41.338386] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.792 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 2255008 00:18:32.792 [2024-07-24 18:21:41.363356] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.052 18:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.XxWnk15w4c 00:18:33.052 18:21:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:33.052 18:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:33.052 18:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:33.052 18:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:33.052 18:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:33.052 18:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:33.052 18:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:33.052 00:18:33.052 real 0m5.844s 00:18:33.052 user 0m8.987s 00:18:33.052 sys 0m1.031s 00:18:33.052 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:33.052 18:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.052 ************************************ 00:18:33.052 END TEST raid_read_error_test 00:18:33.052 ************************************ 00:18:33.052 18:21:41 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:18:33.052 18:21:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:33.052 18:21:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:33.052 18:21:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.052 ************************************ 00:18:33.052 START TEST raid_write_error_test 00:18:33.052 ************************************ 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:33.052 18:21:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:33.052 18:21:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:33.052 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:33.311 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.cssO9wSvJ3 00:18:33.311 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:33.311 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=2256118 00:18:33.311 18:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 2256118 /var/tmp/spdk-raid.sock 00:18:33.311 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 2256118 ']' 00:18:33.311 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:33.311 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.311 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:33.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:18:33.311 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.311 18:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.311 [2024-07-24 18:21:41.683836] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:18:33.311 [2024-07-24 18:21:41.683879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256118 ] 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:01.0 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:01.1 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:01.2 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:01.3 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:01.4 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:01.5 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:01.6 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:01.7 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:02.0 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested 
device 0000:b3:02.1 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:02.2 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:02.3 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:02.4 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:02.5 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:02.6 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b3:02.7 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:01.0 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:01.1 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:01.2 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:01.3 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:01.4 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:01.5 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:01.6 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:01.7 
cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:02.0 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:02.1 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:02.2 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:02.3 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:02.4 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:02.5 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:02.6 cannot be used 00:18:33.311 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:33.311 EAL: Requested device 0000:b5:02.7 cannot be used 00:18:33.311 [2024-07-24 18:21:41.777429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.311 [2024-07-24 18:21:41.850324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.311 [2024-07-24 18:21:41.905332] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.311 [2024-07-24 18:21:41.905359] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.248 18:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.248 18:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:34.248 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:34.248 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:34.248 BaseBdev1_malloc 00:18:34.248 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:34.248 true 00:18:34.248 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:34.507 [2024-07-24 18:21:42.961642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:34.507 [2024-07-24 18:21:42.961674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.507 [2024-07-24 18:21:42.961691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2975ed0 00:18:34.507 [2024-07-24 18:21:42.961699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.507 [2024-07-24 18:21:42.962837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.507 [2024-07-24 18:21:42.962860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:34.507 BaseBdev1 00:18:34.507 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:34.507 18:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:34.766 BaseBdev2_malloc 00:18:34.766 18:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:34.766 true 00:18:34.766 18:21:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:35.096 [2024-07-24 18:21:43.478414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:35.096 [2024-07-24 18:21:43.478447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.096 [2024-07-24 18:21:43.478460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x297ab60 00:18:35.096 [2024-07-24 18:21:43.478467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.096 [2024-07-24 18:21:43.479461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.096 [2024-07-24 18:21:43.479484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:35.096 BaseBdev2 00:18:35.096 18:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:35.096 18:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:35.096 BaseBdev3_malloc 00:18:35.096 18:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:35.354 true 00:18:35.354 18:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:35.613 [2024-07-24 18:21:43.979406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:35.613 [2024-07-24 18:21:43.979439] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.613 [2024-07-24 18:21:43.979455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x297bad0 00:18:35.613 [2024-07-24 18:21:43.979464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.613 [2024-07-24 18:21:43.980481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.613 [2024-07-24 18:21:43.980504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:35.613 BaseBdev3 00:18:35.613 18:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:35.613 18:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:35.613 BaseBdev4_malloc 00:18:35.613 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:35.872 true 00:18:35.872 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:36.131 [2024-07-24 18:21:44.492391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:36.131 [2024-07-24 18:21:44.492424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.131 [2024-07-24 18:21:44.492438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x297dd40 00:18:36.131 [2024-07-24 18:21:44.492447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.131 [2024-07-24 18:21:44.493450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:36.131 [2024-07-24 18:21:44.493472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:36.131 BaseBdev4 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:36.131 [2024-07-24 18:21:44.644809] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.131 [2024-07-24 18:21:44.645607] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.131 [2024-07-24 18:21:44.645660] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:36.131 [2024-07-24 18:21:44.645696] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:36.131 [2024-07-24 18:21:44.645847] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x297eb10 00:18:36.131 [2024-07-24 18:21:44.645854] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:36.131 [2024-07-24 18:21:44.645976] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x297fdc0 00:18:36.131 [2024-07-24 18:21:44.646073] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x297eb10 00:18:36.131 [2024-07-24 18:21:44.646080] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x297eb10 00:18:36.131 [2024-07-24 18:21:44.646143] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.131 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.389 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:36.389 "name": "raid_bdev1", 00:18:36.389 "uuid": "ae7a4607-8b56-4703-b426-f3004275e16b", 00:18:36.389 "strip_size_kb": 0, 00:18:36.389 "state": "online", 00:18:36.389 "raid_level": "raid1", 00:18:36.389 "superblock": true, 00:18:36.389 "num_base_bdevs": 4, 00:18:36.389 "num_base_bdevs_discovered": 4, 00:18:36.389 "num_base_bdevs_operational": 4, 00:18:36.389 "base_bdevs_list": [ 00:18:36.389 { 00:18:36.389 "name": "BaseBdev1", 00:18:36.389 "uuid": "4f3a37e4-fa6d-5bf7-a408-dd5bc698a3ba", 00:18:36.389 "is_configured": true, 00:18:36.389 "data_offset": 2048, 00:18:36.389 "data_size": 63488 00:18:36.389 }, 00:18:36.389 { 00:18:36.389 "name": "BaseBdev2", 00:18:36.389 "uuid": "f365e34f-acd0-5d32-8eee-00e3196eb316", 00:18:36.389 "is_configured": true, 
00:18:36.389 "data_offset": 2048, 00:18:36.389 "data_size": 63488 00:18:36.389 }, 00:18:36.389 { 00:18:36.389 "name": "BaseBdev3", 00:18:36.389 "uuid": "b473b5ea-c1bf-50c7-9c48-dd3bc6d8fde4", 00:18:36.389 "is_configured": true, 00:18:36.389 "data_offset": 2048, 00:18:36.389 "data_size": 63488 00:18:36.389 }, 00:18:36.389 { 00:18:36.389 "name": "BaseBdev4", 00:18:36.389 "uuid": "091fcec0-7128-58ac-a63d-6b1ef8999f60", 00:18:36.389 "is_configured": true, 00:18:36.389 "data_offset": 2048, 00:18:36.389 "data_size": 63488 00:18:36.389 } 00:18:36.389 ] 00:18:36.389 }' 00:18:36.389 18:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:36.389 18:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.956 18:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:36.956 18:21:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:36.956 [2024-07-24 18:21:45.354859] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x297fdc0 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:37.892 [2024-07-24 18:21:46.441066] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:37.892 [2024-07-24 18:21:46.441109] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:37.892 [2024-07-24 18:21:46.441290] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x297fdc0 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.892 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.151 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.151 "name": "raid_bdev1", 00:18:38.151 "uuid": "ae7a4607-8b56-4703-b426-f3004275e16b", 00:18:38.151 "strip_size_kb": 0, 00:18:38.151 "state": "online", 00:18:38.151 "raid_level": 
"raid1", 00:18:38.151 "superblock": true, 00:18:38.151 "num_base_bdevs": 4, 00:18:38.151 "num_base_bdevs_discovered": 3, 00:18:38.151 "num_base_bdevs_operational": 3, 00:18:38.151 "base_bdevs_list": [ 00:18:38.151 { 00:18:38.151 "name": null, 00:18:38.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.151 "is_configured": false, 00:18:38.151 "data_offset": 2048, 00:18:38.151 "data_size": 63488 00:18:38.151 }, 00:18:38.151 { 00:18:38.151 "name": "BaseBdev2", 00:18:38.151 "uuid": "f365e34f-acd0-5d32-8eee-00e3196eb316", 00:18:38.151 "is_configured": true, 00:18:38.151 "data_offset": 2048, 00:18:38.151 "data_size": 63488 00:18:38.151 }, 00:18:38.151 { 00:18:38.151 "name": "BaseBdev3", 00:18:38.151 "uuid": "b473b5ea-c1bf-50c7-9c48-dd3bc6d8fde4", 00:18:38.151 "is_configured": true, 00:18:38.151 "data_offset": 2048, 00:18:38.151 "data_size": 63488 00:18:38.151 }, 00:18:38.151 { 00:18:38.151 "name": "BaseBdev4", 00:18:38.151 "uuid": "091fcec0-7128-58ac-a63d-6b1ef8999f60", 00:18:38.151 "is_configured": true, 00:18:38.151 "data_offset": 2048, 00:18:38.151 "data_size": 63488 00:18:38.151 } 00:18:38.151 ] 00:18:38.151 }' 00:18:38.151 18:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.151 18:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.718 [2024-07-24 18:21:47.217340] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.718 [2024-07-24 18:21:47.217370] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.718 [2024-07-24 18:21:47.219316] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.718 [2024-07-24 18:21:47.219347] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:38.718 [2024-07-24 18:21:47.219410] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.718 [2024-07-24 18:21:47.219418] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x297eb10 name raid_bdev1, state offline 00:18:38.718 0 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 2256118 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 2256118 ']' 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 2256118 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2256118 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2256118' 00:18:38.718 killing process with pid 2256118 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 2256118 00:18:38.718 [2024-07-24 18:21:47.293120] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.718 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 2256118 00:18:38.977 [2024-07-24 18:21:47.318024] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.977 18:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.cssO9wSvJ3 00:18:38.977 18:21:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:38.977 18:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:38.977 18:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:38.977 18:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:38.977 18:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:38.977 18:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:38.977 18:21:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:38.977 00:18:38.977 real 0m5.867s 00:18:38.977 user 0m9.013s 00:18:38.977 sys 0m1.043s 00:18:38.977 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:38.977 18:21:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.977 ************************************ 00:18:38.977 END TEST raid_write_error_test 00:18:38.977 ************************************ 00:18:38.977 18:21:47 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:18:38.977 18:21:47 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:18:38.977 18:21:47 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:18:38.977 18:21:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:38.977 18:21:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:38.978 18:21:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:39.237 ************************************ 00:18:39.237 START TEST raid_rebuild_test 00:18:39.237 ************************************ 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:39.237 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:39.238 18:21:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2257276 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2257276 /var/tmp/spdk-raid.sock 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 2257276 ']' 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:39.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.238 18:21:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.238 [2024-07-24 18:21:47.653819] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:18:39.238 [2024-07-24 18:21:47.653864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257276 ] 00:18:39.238 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:39.238 Zero copy mechanism will not be used. 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:01.0 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:01.1 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:01.2 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:01.3 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:01.4 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:01.5 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:01.6 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:01.7 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:02.0 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:02.1 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:02.2 cannot be used 00:18:39.238 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:02.3 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:02.4 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:02.5 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:02.6 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b3:02.7 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:01.0 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:01.1 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:01.2 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:01.3 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:01.4 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:01.5 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:01.6 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:01.7 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:02.0 cannot be used 00:18:39.238 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:02.1 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:02.2 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:02.3 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:02.4 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:02.5 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:02.6 cannot be used 00:18:39.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:39.238 EAL: Requested device 0000:b5:02.7 cannot be used 00:18:39.238 [2024-07-24 18:21:47.745043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.238 [2024-07-24 18:21:47.812214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.497 [2024-07-24 18:21:47.864280] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.497 [2024-07-24 18:21:47.864310] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.064 18:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.064 18:21:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:18:40.064 18:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:40.064 18:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:40.064 BaseBdev1_malloc 00:18:40.064 18:21:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:40.323 [2024-07-24 18:21:48.772674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:40.323 [2024-07-24 18:21:48.772717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.323 [2024-07-24 18:21:48.772733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd19370 00:18:40.323 [2024-07-24 18:21:48.772742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.323 [2024-07-24 18:21:48.773857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.323 [2024-07-24 18:21:48.773881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:40.323 BaseBdev1 00:18:40.323 18:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:40.324 18:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:40.582 BaseBdev2_malloc 00:18:40.582 18:21:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:40.582 [2024-07-24 18:21:49.113009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:40.582 [2024-07-24 18:21:49.113040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.582 [2024-07-24 18:21:49.113052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xebce70 00:18:40.582 [2024-07-24 18:21:49.113060] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.582 [2024-07-24 18:21:49.113994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.582 [2024-07-24 18:21:49.114014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:40.582 BaseBdev2 00:18:40.582 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:40.840 spare_malloc 00:18:40.841 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:41.099 spare_delay 00:18:41.099 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:41.099 [2024-07-24 18:21:49.621870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:41.099 [2024-07-24 18:21:49.621899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.099 [2024-07-24 18:21:49.621911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xebc4b0 00:18:41.099 [2024-07-24 18:21:49.621919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.099 [2024-07-24 18:21:49.622788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.099 [2024-07-24 18:21:49.622808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:41.099 spare 00:18:41.099 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 
'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:41.358 [2024-07-24 18:21:49.790322] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.358 [2024-07-24 18:21:49.791113] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.358 [2024-07-24 18:21:49.791165] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xd11030 00:18:41.358 [2024-07-24 18:21:49.791172] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:41.358 [2024-07-24 18:21:49.791292] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xebd100 00:18:41.358 [2024-07-24 18:21:49.791383] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd11030 00:18:41.358 [2024-07-24 18:21:49.791390] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd11030 00:18:41.358 [2024-07-24 18:21:49.791459] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.358 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.616 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.616 "name": "raid_bdev1", 00:18:41.616 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:41.616 "strip_size_kb": 0, 00:18:41.616 "state": "online", 00:18:41.616 "raid_level": "raid1", 00:18:41.616 "superblock": false, 00:18:41.616 "num_base_bdevs": 2, 00:18:41.616 "num_base_bdevs_discovered": 2, 00:18:41.616 "num_base_bdevs_operational": 2, 00:18:41.616 "base_bdevs_list": [ 00:18:41.616 { 00:18:41.616 "name": "BaseBdev1", 00:18:41.616 "uuid": "31282b86-c19c-5a40-ab6c-8561459d5f78", 00:18:41.616 "is_configured": true, 00:18:41.616 "data_offset": 0, 00:18:41.616 "data_size": 65536 00:18:41.616 }, 00:18:41.616 { 00:18:41.616 "name": "BaseBdev2", 00:18:41.616 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:41.616 "is_configured": true, 00:18:41.616 "data_offset": 0, 00:18:41.616 "data_size": 65536 00:18:41.616 } 00:18:41.616 ] 00:18:41.616 }' 00:18:41.616 18:21:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.616 18:21:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.184 18:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:42.184 18:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:42.184 [2024-07-24 18:21:50.648701] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:42.184 18:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:18:42.184 18:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:42.184 18:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:42.444 18:21:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:42.444 
[2024-07-24 18:21:51.009496] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xebd100 00:18:42.444 /dev/nbd0 00:18:42.444 18:21:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.703 1+0 records in 00:18:42.703 1+0 records out 00:18:42.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155222 s, 26.4 MB/s 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 
4096 '!=' 0 ']' 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:18:42.703 18:21:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:46.894 65536+0 records in 00:18:46.894 65536+0 records out 00:18:46.894 33554432 bytes (34 MB, 32 MiB) copied, 3.9196 s, 8.6 MB/s 00:18:46.894 18:21:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:46.894 18:21:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:46.894 18:21:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:46.894 18:21:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.894 18:21:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:46.894 18:21:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.894 18:21:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:46.894 [2024-07-24 18:21:55.178117] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:46.894 [2024-07-24 18:21:55.338562] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.894 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.152 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.152 "name": "raid_bdev1", 00:18:47.152 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:47.152 "strip_size_kb": 0, 00:18:47.152 "state": "online", 00:18:47.152 "raid_level": "raid1", 00:18:47.152 "superblock": false, 00:18:47.152 "num_base_bdevs": 2, 00:18:47.152 "num_base_bdevs_discovered": 1, 00:18:47.152 "num_base_bdevs_operational": 1, 00:18:47.152 "base_bdevs_list": [ 00:18:47.152 { 00:18:47.152 "name": null, 00:18:47.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.152 "is_configured": false, 00:18:47.152 "data_offset": 0, 00:18:47.152 "data_size": 65536 00:18:47.152 }, 00:18:47.152 { 00:18:47.152 "name": "BaseBdev2", 00:18:47.152 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:47.152 "is_configured": true, 00:18:47.152 "data_offset": 0, 00:18:47.152 "data_size": 65536 00:18:47.152 } 00:18:47.152 ] 00:18:47.152 }' 00:18:47.152 18:21:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.152 18:21:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.720 18:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.720 [2024-07-24 18:21:56.160702] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.721 [2024-07-24 18:21:56.165152] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xeb0fd0 00:18:47.721 [2024-07-24 18:21:56.166735] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:47.721 
18:21:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:18:48.655 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.655 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:48.655 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:48.655 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:48.655 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:48.655 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.655 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.914 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:48.914 "name": "raid_bdev1", 00:18:48.914 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:48.914 "strip_size_kb": 0, 00:18:48.914 "state": "online", 00:18:48.914 "raid_level": "raid1", 00:18:48.914 "superblock": false, 00:18:48.914 "num_base_bdevs": 2, 00:18:48.914 "num_base_bdevs_discovered": 2, 00:18:48.914 "num_base_bdevs_operational": 2, 00:18:48.914 "process": { 00:18:48.914 "type": "rebuild", 00:18:48.914 "target": "spare", 00:18:48.914 "progress": { 00:18:48.914 "blocks": 22528, 00:18:48.914 "percent": 34 00:18:48.914 } 00:18:48.914 }, 00:18:48.914 "base_bdevs_list": [ 00:18:48.914 { 00:18:48.914 "name": "spare", 00:18:48.914 "uuid": "38c9f2c8-1f4d-5f8a-9c04-5a5f3c48e3cc", 00:18:48.914 "is_configured": true, 00:18:48.914 "data_offset": 0, 00:18:48.914 "data_size": 65536 00:18:48.914 }, 00:18:48.914 { 00:18:48.914 "name": "BaseBdev2", 00:18:48.914 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:48.914 "is_configured": 
true, 00:18:48.914 "data_offset": 0, 00:18:48.914 "data_size": 65536 00:18:48.914 } 00:18:48.914 ] 00:18:48.914 }' 00:18:48.914 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:48.914 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.914 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:48.914 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.914 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:49.172 [2024-07-24 18:21:57.597354] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.172 [2024-07-24 18:21:57.677158] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.172 [2024-07-24 18:21:57.677192] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.172 [2024-07-24 18:21:57.677202] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.172 [2024-07-24 18:21:57.677207] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.172 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.431 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.431 "name": "raid_bdev1", 00:18:49.431 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:49.431 "strip_size_kb": 0, 00:18:49.431 "state": "online", 00:18:49.431 "raid_level": "raid1", 00:18:49.431 "superblock": false, 00:18:49.431 "num_base_bdevs": 2, 00:18:49.431 "num_base_bdevs_discovered": 1, 00:18:49.431 "num_base_bdevs_operational": 1, 00:18:49.431 "base_bdevs_list": [ 00:18:49.431 { 00:18:49.431 "name": null, 00:18:49.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.431 "is_configured": false, 00:18:49.431 "data_offset": 0, 00:18:49.431 "data_size": 65536 00:18:49.431 }, 00:18:49.431 { 00:18:49.431 "name": "BaseBdev2", 00:18:49.431 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:49.431 "is_configured": true, 00:18:49.431 "data_offset": 0, 00:18:49.431 "data_size": 65536 00:18:49.431 } 00:18:49.431 ] 00:18:49.431 }' 00:18:49.431 18:21:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.431 18:21:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.999 18:21:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.999 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:49.999 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:49.999 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:49.999 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:49.999 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.000 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.000 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:50.000 "name": "raid_bdev1", 00:18:50.000 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:50.000 "strip_size_kb": 0, 00:18:50.000 "state": "online", 00:18:50.000 "raid_level": "raid1", 00:18:50.000 "superblock": false, 00:18:50.000 "num_base_bdevs": 2, 00:18:50.000 "num_base_bdevs_discovered": 1, 00:18:50.000 "num_base_bdevs_operational": 1, 00:18:50.000 "base_bdevs_list": [ 00:18:50.000 { 00:18:50.000 "name": null, 00:18:50.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.000 "is_configured": false, 00:18:50.000 "data_offset": 0, 00:18:50.000 "data_size": 65536 00:18:50.000 }, 00:18:50.000 { 00:18:50.000 "name": "BaseBdev2", 00:18:50.000 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:50.000 "is_configured": true, 00:18:50.000 "data_offset": 0, 00:18:50.000 "data_size": 65536 00:18:50.000 } 00:18:50.000 ] 00:18:50.000 }' 00:18:50.000 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:50.000 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:50.000 
18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:50.259 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:50.259 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.259 [2024-07-24 18:21:58.763979] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.259 [2024-07-24 18:21:58.768379] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xeb0fd0 00:18:50.259 [2024-07-24 18:21:58.769447] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.259 18:21:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:51.195 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.195 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:51.195 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:51.195 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:51.195 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:51.195 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.195 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.453 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:51.453 "name": "raid_bdev1", 00:18:51.453 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:51.453 "strip_size_kb": 0, 00:18:51.453 "state": 
"online", 00:18:51.453 "raid_level": "raid1", 00:18:51.453 "superblock": false, 00:18:51.453 "num_base_bdevs": 2, 00:18:51.453 "num_base_bdevs_discovered": 2, 00:18:51.453 "num_base_bdevs_operational": 2, 00:18:51.453 "process": { 00:18:51.454 "type": "rebuild", 00:18:51.454 "target": "spare", 00:18:51.454 "progress": { 00:18:51.454 "blocks": 22528, 00:18:51.454 "percent": 34 00:18:51.454 } 00:18:51.454 }, 00:18:51.454 "base_bdevs_list": [ 00:18:51.454 { 00:18:51.454 "name": "spare", 00:18:51.454 "uuid": "38c9f2c8-1f4d-5f8a-9c04-5a5f3c48e3cc", 00:18:51.454 "is_configured": true, 00:18:51.454 "data_offset": 0, 00:18:51.454 "data_size": 65536 00:18:51.454 }, 00:18:51.454 { 00:18:51.454 "name": "BaseBdev2", 00:18:51.454 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:51.454 "is_configured": true, 00:18:51.454 "data_offset": 0, 00:18:51.454 "data_size": 65536 00:18:51.454 } 00:18:51.454 ] 00:18:51.454 }' 00:18:51.454 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:51.454 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.454 18:21:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=584 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:51.454 
18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.454 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.730 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:51.730 "name": "raid_bdev1", 00:18:51.730 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:51.730 "strip_size_kb": 0, 00:18:51.730 "state": "online", 00:18:51.730 "raid_level": "raid1", 00:18:51.730 "superblock": false, 00:18:51.730 "num_base_bdevs": 2, 00:18:51.730 "num_base_bdevs_discovered": 2, 00:18:51.730 "num_base_bdevs_operational": 2, 00:18:51.730 "process": { 00:18:51.730 "type": "rebuild", 00:18:51.730 "target": "spare", 00:18:51.730 "progress": { 00:18:51.730 "blocks": 26624, 00:18:51.730 "percent": 40 00:18:51.730 } 00:18:51.730 }, 00:18:51.730 "base_bdevs_list": [ 00:18:51.730 { 00:18:51.730 "name": "spare", 00:18:51.730 "uuid": "38c9f2c8-1f4d-5f8a-9c04-5a5f3c48e3cc", 00:18:51.730 "is_configured": true, 00:18:51.730 "data_offset": 0, 00:18:51.730 "data_size": 65536 00:18:51.730 }, 00:18:51.730 { 00:18:51.730 "name": "BaseBdev2", 00:18:51.730 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:51.730 "is_configured": true, 00:18:51.730 "data_offset": 0, 00:18:51.730 "data_size": 65536 00:18:51.730 } 
00:18:51.730 ] 00:18:51.730 }' 00:18:51.730 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:51.730 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.730 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:51.730 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.730 18:22:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:53.147 "name": "raid_bdev1", 00:18:53.147 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:53.147 "strip_size_kb": 0, 00:18:53.147 "state": "online", 00:18:53.147 "raid_level": "raid1", 00:18:53.147 "superblock": false, 00:18:53.147 "num_base_bdevs": 2, 00:18:53.147 "num_base_bdevs_discovered": 2, 00:18:53.147 "num_base_bdevs_operational": 2, 
00:18:53.147 "process": { 00:18:53.147 "type": "rebuild", 00:18:53.147 "target": "spare", 00:18:53.147 "progress": { 00:18:53.147 "blocks": 53248, 00:18:53.147 "percent": 81 00:18:53.147 } 00:18:53.147 }, 00:18:53.147 "base_bdevs_list": [ 00:18:53.147 { 00:18:53.147 "name": "spare", 00:18:53.147 "uuid": "38c9f2c8-1f4d-5f8a-9c04-5a5f3c48e3cc", 00:18:53.147 "is_configured": true, 00:18:53.147 "data_offset": 0, 00:18:53.147 "data_size": 65536 00:18:53.147 }, 00:18:53.147 { 00:18:53.147 "name": "BaseBdev2", 00:18:53.147 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:53.147 "is_configured": true, 00:18:53.147 "data_offset": 0, 00:18:53.147 "data_size": 65536 00:18:53.147 } 00:18:53.147 ] 00:18:53.147 }' 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.147 18:22:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:53.406 [2024-07-24 18:22:01.991379] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:53.406 [2024-07-24 18:22:01.991420] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:53.406 [2024-07-24 18:22:01.991449] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.974 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:53.974 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.974 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:53.974 18:22:02 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:53.974 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:53.974 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:53.974 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.974 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:54.234 "name": "raid_bdev1", 00:18:54.234 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:54.234 "strip_size_kb": 0, 00:18:54.234 "state": "online", 00:18:54.234 "raid_level": "raid1", 00:18:54.234 "superblock": false, 00:18:54.234 "num_base_bdevs": 2, 00:18:54.234 "num_base_bdevs_discovered": 2, 00:18:54.234 "num_base_bdevs_operational": 2, 00:18:54.234 "base_bdevs_list": [ 00:18:54.234 { 00:18:54.234 "name": "spare", 00:18:54.234 "uuid": "38c9f2c8-1f4d-5f8a-9c04-5a5f3c48e3cc", 00:18:54.234 "is_configured": true, 00:18:54.234 "data_offset": 0, 00:18:54.234 "data_size": 65536 00:18:54.234 }, 00:18:54.234 { 00:18:54.234 "name": "BaseBdev2", 00:18:54.234 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:54.234 "is_configured": true, 00:18:54.234 "data_offset": 0, 00:18:54.234 "data_size": 65536 00:18:54.234 } 00:18:54.234 ] 00:18:54.234 }' 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == 
\s\p\a\r\e ]] 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.234 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.494 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:54.494 "name": "raid_bdev1", 00:18:54.494 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:54.494 "strip_size_kb": 0, 00:18:54.494 "state": "online", 00:18:54.494 "raid_level": "raid1", 00:18:54.494 "superblock": false, 00:18:54.494 "num_base_bdevs": 2, 00:18:54.494 "num_base_bdevs_discovered": 2, 00:18:54.494 "num_base_bdevs_operational": 2, 00:18:54.494 "base_bdevs_list": [ 00:18:54.494 { 00:18:54.494 "name": "spare", 00:18:54.494 "uuid": "38c9f2c8-1f4d-5f8a-9c04-5a5f3c48e3cc", 00:18:54.494 "is_configured": true, 00:18:54.494 "data_offset": 0, 00:18:54.494 "data_size": 65536 00:18:54.494 }, 00:18:54.494 { 00:18:54.494 "name": "BaseBdev2", 00:18:54.494 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:54.494 "is_configured": true, 00:18:54.494 "data_offset": 0, 00:18:54.494 "data_size": 65536 00:18:54.494 } 00:18:54.494 ] 00:18:54.494 }' 00:18:54.494 18:22:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 
-- # jq -r '.process.type // "none"' 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.494 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.753 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.754 "name": "raid_bdev1", 00:18:54.754 "uuid": "4565bdc1-b533-4551-a68c-cbd16b88cbad", 00:18:54.754 "strip_size_kb": 0, 00:18:54.754 "state": 
"online", 00:18:54.754 "raid_level": "raid1", 00:18:54.754 "superblock": false, 00:18:54.754 "num_base_bdevs": 2, 00:18:54.754 "num_base_bdevs_discovered": 2, 00:18:54.754 "num_base_bdevs_operational": 2, 00:18:54.754 "base_bdevs_list": [ 00:18:54.754 { 00:18:54.754 "name": "spare", 00:18:54.754 "uuid": "38c9f2c8-1f4d-5f8a-9c04-5a5f3c48e3cc", 00:18:54.754 "is_configured": true, 00:18:54.754 "data_offset": 0, 00:18:54.754 "data_size": 65536 00:18:54.754 }, 00:18:54.754 { 00:18:54.754 "name": "BaseBdev2", 00:18:54.754 "uuid": "4fa606e6-2486-5582-a7c9-4e022b215c11", 00:18:54.754 "is_configured": true, 00:18:54.754 "data_offset": 0, 00:18:54.754 "data_size": 65536 00:18:54.754 } 00:18:54.754 ] 00:18:54.754 }' 00:18:54.754 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.754 18:22:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.320 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:55.320 [2024-07-24 18:22:03.875870] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:55.320 [2024-07-24 18:22:03.875891] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.320 [2024-07-24 18:22:03.875939] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.320 [2024-07-24 18:22:03.875981] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:55.320 [2024-07-24 18:22:03.875988] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd11030 name raid_bdev1, state offline 00:18:55.321 18:22:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.321 18:22:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.580 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:55.839 /dev/nbd0 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:55.839 18:22:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:55.839 1+0 records in 00:18:55.839 1+0 records out 00:18:55.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278227 s, 14.7 MB/s 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:55.839 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:56.097 /dev/nbd1 00:18:56.097 18:22:04 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:56.097 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:56.097 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:56.097 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:18:56.097 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:56.097 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:56.097 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:56.097 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:18:56.097 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:56.097 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:56.097 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:56.097 1+0 records in 00:18:56.097 1+0 records out 00:18:56.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264709 s, 15.5 MB/s 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:18:56.098 18:22:04 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:56.098 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:56.356 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2257276 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 2257276 ']' 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 2257276 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.615 18:22:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2257276 00:18:56.615 18:22:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:56.615 18:22:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:56.615 18:22:05 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2257276' 00:18:56.615 killing process with pid 2257276 00:18:56.615 18:22:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 2257276 00:18:56.615 Received shutdown signal, test time was about 60.000000 seconds 00:18:56.615 00:18:56.615 Latency(us) 00:18:56.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.615 =================================================================================================================== 00:18:56.615 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:56.615 [2024-07-24 18:22:05.012565] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.615 18:22:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 2257276 00:18:56.615 [2024-07-24 18:22:05.035766] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:56.615 18:22:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:18:56.615 00:18:56.615 real 0m17.616s 00:18:56.615 user 0m23.011s 00:18:56.615 sys 0m3.978s 00:18:56.615 18:22:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:56.615 18:22:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.615 ************************************ 00:18:56.615 END TEST raid_rebuild_test 00:18:56.615 ************************************ 00:18:56.875 18:22:05 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:18:56.875 18:22:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:56.875 18:22:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:56.875 18:22:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.875 ************************************ 00:18:56.875 START TEST raid_rebuild_test_sb 00:18:56.875 ************************************ 00:18:56.875 
18:22:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:56.875 
18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2260463 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2260463 /var/tmp/spdk-raid.sock 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2260463 ']' 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:56.875 18:22:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.876 18:22:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:56.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:18:56.876 18:22:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.876 18:22:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.876 [2024-07-24 18:22:05.357420] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:18:56.876 [2024-07-24 18:22:05.357468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260463 ] 00:18:56.876 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:56.876 Zero copy mechanism will not be used. 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:01.0 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:01.1 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:01.2 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:01.3 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:01.4 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:01.5 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:01.6 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:01.7 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 
0000:b3:02.0 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:02.1 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:02.2 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:02.3 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:02.4 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:02.5 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:02.6 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b3:02.7 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:01.0 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:01.1 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:01.2 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:01.3 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:01.4 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:01.5 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:01.6 cannot be 
used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:01.7 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:02.0 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:02.1 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:02.2 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:02.3 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:02.4 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:02.5 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:02.6 cannot be used 00:18:56.876 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:56.876 EAL: Requested device 0000:b5:02.7 cannot be used 00:18:56.876 [2024-07-24 18:22:05.452817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.135 [2024-07-24 18:22:05.533124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.135 [2024-07-24 18:22:05.584161] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.135 [2024-07-24 18:22:05.584189] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.705 18:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.705 18:22:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:57.705 18:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- 
# for bdev in "${base_bdevs[@]}" 00:18:57.705 18:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:57.965 BaseBdev1_malloc 00:18:57.965 18:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:57.965 [2024-07-24 18:22:06.472295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:57.965 [2024-07-24 18:22:06.472330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.965 [2024-07-24 18:22:06.472352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14ca370 00:18:57.965 [2024-07-24 18:22:06.472360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.965 [2024-07-24 18:22:06.473455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.965 [2024-07-24 18:22:06.473479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:57.965 BaseBdev1 00:18:57.965 18:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:57.965 18:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:58.224 BaseBdev2_malloc 00:18:58.224 18:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:58.224 [2024-07-24 18:22:06.796856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:58.224 
[2024-07-24 18:22:06.796890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.224 [2024-07-24 18:22:06.796904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x166de70 00:18:58.224 [2024-07-24 18:22:06.796912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.224 [2024-07-24 18:22:06.797948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.225 [2024-07-24 18:22:06.797969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:58.225 BaseBdev2 00:18:58.225 18:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:58.483 spare_malloc 00:18:58.483 18:22:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:58.742 spare_delay 00:18:58.742 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:58.742 [2024-07-24 18:22:07.309741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:58.742 [2024-07-24 18:22:07.309776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.742 [2024-07-24 18:22:07.309789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x166d4b0 00:18:58.742 [2024-07-24 18:22:07.309797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.742 [2024-07-24 18:22:07.310786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.742 [2024-07-24 18:22:07.310808] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:58.742 spare 00:18:58.742 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:59.001 [2024-07-24 18:22:07.466170] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.001 [2024-07-24 18:22:07.466965] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:59.001 [2024-07-24 18:22:07.467077] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14c2030 00:18:59.001 [2024-07-24 18:22:07.467086] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:59.001 [2024-07-24 18:22:07.467209] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x166e100 00:18:59.002 [2024-07-24 18:22:07.467296] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14c2030 00:18:59.002 [2024-07-24 18:22:07.467302] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14c2030 00:18:59.002 [2024-07-24 18:22:07.467365] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=2 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.002 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.261 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.261 "name": "raid_bdev1", 00:18:59.261 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:18:59.261 "strip_size_kb": 0, 00:18:59.261 "state": "online", 00:18:59.261 "raid_level": "raid1", 00:18:59.261 "superblock": true, 00:18:59.261 "num_base_bdevs": 2, 00:18:59.261 "num_base_bdevs_discovered": 2, 00:18:59.261 "num_base_bdevs_operational": 2, 00:18:59.261 "base_bdevs_list": [ 00:18:59.261 { 00:18:59.261 "name": "BaseBdev1", 00:18:59.261 "uuid": "fdfc7552-7fef-550f-8a6d-7f93a02e4b98", 00:18:59.261 "is_configured": true, 00:18:59.261 "data_offset": 2048, 00:18:59.261 "data_size": 63488 00:18:59.261 }, 00:18:59.261 { 00:18:59.261 "name": "BaseBdev2", 00:18:59.261 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:18:59.261 "is_configured": true, 00:18:59.261 "data_offset": 2048, 00:18:59.261 "data_size": 63488 00:18:59.261 } 00:18:59.261 ] 00:18:59.261 }' 00:18:59.261 18:22:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.261 18:22:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.830 18:22:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:59.830 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:59.830 [2024-07-24 18:22:08.288418] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.830 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:18:59.830 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.830 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:00.090 [2024-07-24 18:22:08.641211] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x166e100 00:19:00.090 /dev/nbd0 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:00.090 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.349 1+0 records in 00:19:00.349 1+0 records out 00:19:00.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257006 s, 15.9 MB/s 00:19:00.349 18:22:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:19:00.349 18:22:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:04.545 63488+0 records in 00:19:04.545 63488+0 records out 00:19:04.545 32505856 bytes (33 MB, 31 MiB) copied, 3.79848 s, 8.6 MB/s 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.545 18:22:12 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:04.545 [2024-07-24 18:22:12.697161] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:04.545 [2024-07-24 18:22:12.854676] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:04.545 18:22:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.545 18:22:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.545 18:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.545 "name": "raid_bdev1", 00:19:04.545 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:04.545 "strip_size_kb": 0, 00:19:04.545 "state": "online", 00:19:04.545 "raid_level": "raid1", 00:19:04.545 "superblock": true, 00:19:04.545 "num_base_bdevs": 2, 00:19:04.545 "num_base_bdevs_discovered": 1, 00:19:04.545 "num_base_bdevs_operational": 1, 00:19:04.545 "base_bdevs_list": [ 00:19:04.545 { 00:19:04.545 "name": null, 00:19:04.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.545 "is_configured": false, 00:19:04.545 "data_offset": 2048, 00:19:04.545 "data_size": 63488 00:19:04.545 }, 00:19:04.545 { 00:19:04.545 "name": "BaseBdev2", 00:19:04.545 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:04.545 "is_configured": true, 00:19:04.545 "data_offset": 2048, 00:19:04.545 "data_size": 63488 00:19:04.545 } 00:19:04.545 ] 00:19:04.545 }' 00:19:04.545 18:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.545 18:22:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:05.113 18:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:05.113 [2024-07-24 18:22:13.676792] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.113 [2024-07-24 18:22:13.681119] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x166e100 00:19:05.113 [2024-07-24 18:22:13.682688] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:05.113 18:22:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:19:06.491 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.491 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:06.491 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:06.491 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:06.491 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:06.491 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.491 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.491 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.491 "name": "raid_bdev1", 00:19:06.491 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:06.491 "strip_size_kb": 0, 00:19:06.491 "state": "online", 00:19:06.492 "raid_level": "raid1", 00:19:06.492 "superblock": true, 00:19:06.492 "num_base_bdevs": 2, 00:19:06.492 "num_base_bdevs_discovered": 2, 00:19:06.492 
"num_base_bdevs_operational": 2, 00:19:06.492 "process": { 00:19:06.492 "type": "rebuild", 00:19:06.492 "target": "spare", 00:19:06.492 "progress": { 00:19:06.492 "blocks": 22528, 00:19:06.492 "percent": 35 00:19:06.492 } 00:19:06.492 }, 00:19:06.492 "base_bdevs_list": [ 00:19:06.492 { 00:19:06.492 "name": "spare", 00:19:06.492 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:06.492 "is_configured": true, 00:19:06.492 "data_offset": 2048, 00:19:06.492 "data_size": 63488 00:19:06.492 }, 00:19:06.492 { 00:19:06.492 "name": "BaseBdev2", 00:19:06.492 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:06.492 "is_configured": true, 00:19:06.492 "data_offset": 2048, 00:19:06.492 "data_size": 63488 00:19:06.492 } 00:19:06.492 ] 00:19:06.492 }' 00:19:06.492 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:06.492 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.492 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:06.492 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.492 18:22:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:06.750 [2024-07-24 18:22:15.101182] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.750 [2024-07-24 18:22:15.193210] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:06.750 [2024-07-24 18:22:15.193243] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.750 [2024-07-24 18:22:15.193253] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.750 [2024-07-24 18:22:15.193258] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: 
Failed to remove target bdev: No such device 00:19:06.750 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.750 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:06.750 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:06.751 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:06.751 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:06.751 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:06.751 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.751 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.751 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.751 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.751 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.751 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.009 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.009 "name": "raid_bdev1", 00:19:07.009 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:07.009 "strip_size_kb": 0, 00:19:07.009 "state": "online", 00:19:07.009 "raid_level": "raid1", 00:19:07.009 "superblock": true, 00:19:07.009 "num_base_bdevs": 2, 00:19:07.009 "num_base_bdevs_discovered": 1, 00:19:07.009 "num_base_bdevs_operational": 1, 00:19:07.009 "base_bdevs_list": [ 00:19:07.009 { 00:19:07.009 "name": 
null, 00:19:07.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.009 "is_configured": false, 00:19:07.009 "data_offset": 2048, 00:19:07.009 "data_size": 63488 00:19:07.009 }, 00:19:07.009 { 00:19:07.009 "name": "BaseBdev2", 00:19:07.009 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:07.009 "is_configured": true, 00:19:07.009 "data_offset": 2048, 00:19:07.009 "data_size": 63488 00:19:07.009 } 00:19:07.009 ] 00:19:07.009 }' 00:19:07.009 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.009 18:22:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.577 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.577 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:07.577 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:07.577 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:07.577 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:07.577 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.577 18:22:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.577 18:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:07.577 "name": "raid_bdev1", 00:19:07.577 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:07.577 "strip_size_kb": 0, 00:19:07.577 "state": "online", 00:19:07.577 "raid_level": "raid1", 00:19:07.577 "superblock": true, 00:19:07.577 "num_base_bdevs": 2, 00:19:07.577 "num_base_bdevs_discovered": 1, 00:19:07.577 "num_base_bdevs_operational": 1, 00:19:07.577 
"base_bdevs_list": [ 00:19:07.577 { 00:19:07.577 "name": null, 00:19:07.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.577 "is_configured": false, 00:19:07.577 "data_offset": 2048, 00:19:07.577 "data_size": 63488 00:19:07.577 }, 00:19:07.577 { 00:19:07.577 "name": "BaseBdev2", 00:19:07.577 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:07.577 "is_configured": true, 00:19:07.577 "data_offset": 2048, 00:19:07.577 "data_size": 63488 00:19:07.577 } 00:19:07.577 ] 00:19:07.577 }' 00:19:07.577 18:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:07.577 18:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:07.577 18:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:07.577 18:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:07.577 18:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:07.836 [2024-07-24 18:22:16.296015] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:07.836 [2024-07-24 18:22:16.300333] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1662630 00:19:07.836 [2024-07-24 18:22:16.301384] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.836 18:22:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:08.821 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.821 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:08.821 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
00:19:08.821 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:08.821 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:08.821 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.821 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.080 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:09.080 "name": "raid_bdev1", 00:19:09.081 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:09.081 "strip_size_kb": 0, 00:19:09.081 "state": "online", 00:19:09.081 "raid_level": "raid1", 00:19:09.081 "superblock": true, 00:19:09.081 "num_base_bdevs": 2, 00:19:09.081 "num_base_bdevs_discovered": 2, 00:19:09.081 "num_base_bdevs_operational": 2, 00:19:09.081 "process": { 00:19:09.081 "type": "rebuild", 00:19:09.081 "target": "spare", 00:19:09.081 "progress": { 00:19:09.081 "blocks": 22528, 00:19:09.081 "percent": 35 00:19:09.081 } 00:19:09.081 }, 00:19:09.081 "base_bdevs_list": [ 00:19:09.081 { 00:19:09.081 "name": "spare", 00:19:09.081 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:09.081 "is_configured": true, 00:19:09.081 "data_offset": 2048, 00:19:09.081 "data_size": 63488 00:19:09.081 }, 00:19:09.081 { 00:19:09.081 "name": "BaseBdev2", 00:19:09.081 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:09.081 "is_configured": true, 00:19:09.081 "data_offset": 2048, 00:19:09.081 "data_size": 63488 00:19:09.081 } 00:19:09.081 ] 00:19:09.081 }' 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:19:09.081 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=601 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.081 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.340 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:19:09.340 "name": "raid_bdev1", 00:19:09.340 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:09.340 "strip_size_kb": 0, 00:19:09.340 "state": "online", 00:19:09.340 "raid_level": "raid1", 00:19:09.340 "superblock": true, 00:19:09.340 "num_base_bdevs": 2, 00:19:09.340 "num_base_bdevs_discovered": 2, 00:19:09.340 "num_base_bdevs_operational": 2, 00:19:09.340 "process": { 00:19:09.340 "type": "rebuild", 00:19:09.340 "target": "spare", 00:19:09.340 "progress": { 00:19:09.340 "blocks": 28672, 00:19:09.340 "percent": 45 00:19:09.340 } 00:19:09.340 }, 00:19:09.340 "base_bdevs_list": [ 00:19:09.340 { 00:19:09.340 "name": "spare", 00:19:09.340 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:09.340 "is_configured": true, 00:19:09.340 "data_offset": 2048, 00:19:09.340 "data_size": 63488 00:19:09.340 }, 00:19:09.340 { 00:19:09.340 "name": "BaseBdev2", 00:19:09.340 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:09.340 "is_configured": true, 00:19:09.340 "data_offset": 2048, 00:19:09.340 "data_size": 63488 00:19:09.340 } 00:19:09.340 ] 00:19:09.340 }' 00:19:09.340 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:09.340 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.340 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:09.340 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.340 18:22:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:10.277 18:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:10.277 18:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.277 18:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:19:10.277 18:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:10.277 18:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:10.277 18:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:10.277 18:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.277 18:22:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.536 18:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:10.536 "name": "raid_bdev1", 00:19:10.536 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:10.536 "strip_size_kb": 0, 00:19:10.536 "state": "online", 00:19:10.536 "raid_level": "raid1", 00:19:10.536 "superblock": true, 00:19:10.536 "num_base_bdevs": 2, 00:19:10.536 "num_base_bdevs_discovered": 2, 00:19:10.536 "num_base_bdevs_operational": 2, 00:19:10.536 "process": { 00:19:10.536 "type": "rebuild", 00:19:10.536 "target": "spare", 00:19:10.536 "progress": { 00:19:10.536 "blocks": 53248, 00:19:10.536 "percent": 83 00:19:10.536 } 00:19:10.536 }, 00:19:10.536 "base_bdevs_list": [ 00:19:10.536 { 00:19:10.536 "name": "spare", 00:19:10.536 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:10.536 "is_configured": true, 00:19:10.536 "data_offset": 2048, 00:19:10.536 "data_size": 63488 00:19:10.536 }, 00:19:10.536 { 00:19:10.536 "name": "BaseBdev2", 00:19:10.536 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:10.536 "is_configured": true, 00:19:10.536 "data_offset": 2048, 00:19:10.536 "data_size": 63488 00:19:10.536 } 00:19:10.536 ] 00:19:10.536 }' 00:19:10.536 18:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:10.536 18:22:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.536 18:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:10.536 18:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.536 18:22:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:11.105 [2024-07-24 18:22:19.422286] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:11.105 [2024-07-24 18:22:19.422327] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:11.105 [2024-07-24 18:22:19.422385] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.673 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:11.673 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.673 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:11.673 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:11.673 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:11.673 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:11.673 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.673 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.673 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:11.673 "name": "raid_bdev1", 00:19:11.673 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:11.673 "strip_size_kb": 0, 00:19:11.673 
"state": "online", 00:19:11.673 "raid_level": "raid1", 00:19:11.673 "superblock": true, 00:19:11.673 "num_base_bdevs": 2, 00:19:11.673 "num_base_bdevs_discovered": 2, 00:19:11.673 "num_base_bdevs_operational": 2, 00:19:11.673 "base_bdevs_list": [ 00:19:11.673 { 00:19:11.673 "name": "spare", 00:19:11.674 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:11.674 "is_configured": true, 00:19:11.674 "data_offset": 2048, 00:19:11.674 "data_size": 63488 00:19:11.674 }, 00:19:11.674 { 00:19:11.674 "name": "BaseBdev2", 00:19:11.674 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:11.674 "is_configured": true, 00:19:11.674 "data_offset": 2048, 00:19:11.674 "data_size": 63488 00:19:11.674 } 00:19:11.674 ] 00:19:11.674 }' 00:19:11.674 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.932 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:11.932 "name": "raid_bdev1", 00:19:11.932 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:11.932 "strip_size_kb": 0, 00:19:11.932 "state": "online", 00:19:11.932 "raid_level": "raid1", 00:19:11.932 "superblock": true, 00:19:11.932 "num_base_bdevs": 2, 00:19:11.932 "num_base_bdevs_discovered": 2, 00:19:11.932 "num_base_bdevs_operational": 2, 00:19:11.932 "base_bdevs_list": [ 00:19:11.932 { 00:19:11.932 "name": "spare", 00:19:11.932 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:11.932 "is_configured": true, 00:19:11.932 "data_offset": 2048, 00:19:11.933 "data_size": 63488 00:19:11.933 }, 00:19:11.933 { 00:19:11.933 "name": "BaseBdev2", 00:19:11.933 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:11.933 "is_configured": true, 00:19:11.933 "data_offset": 2048, 00:19:11.933 "data_size": 63488 00:19:11.933 } 00:19:11.933 ] 00:19:11.933 }' 00:19:11.933 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.251 "name": "raid_bdev1", 00:19:12.251 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:12.251 "strip_size_kb": 0, 00:19:12.251 "state": "online", 00:19:12.251 "raid_level": "raid1", 00:19:12.251 "superblock": true, 00:19:12.251 "num_base_bdevs": 2, 00:19:12.251 "num_base_bdevs_discovered": 2, 00:19:12.251 "num_base_bdevs_operational": 2, 00:19:12.251 "base_bdevs_list": [ 00:19:12.251 { 00:19:12.251 "name": "spare", 00:19:12.251 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:12.251 "is_configured": true, 00:19:12.251 "data_offset": 2048, 00:19:12.251 "data_size": 63488 00:19:12.251 }, 00:19:12.251 { 00:19:12.251 "name": "BaseBdev2", 00:19:12.251 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:12.251 "is_configured": true, 00:19:12.251 "data_offset": 2048, 00:19:12.251 "data_size": 63488 00:19:12.251 } 00:19:12.251 ] 00:19:12.251 }' 00:19:12.251 
18:22:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.251 18:22:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.820 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:12.820 [2024-07-24 18:22:21.359000] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.820 [2024-07-24 18:22:21.359018] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:12.820 [2024-07-24 18:22:21.359063] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.820 [2024-07-24 18:22:21.359104] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.820 [2024-07-24 18:22:21.359111] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14c2030 name raid_bdev1, state offline 00:19:12.820 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.820 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:13.079 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:13.338 /dev/nbd0 00:19:13.338 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:13.338 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:13.338 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:13.338 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:13.338 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:13.339 1+0 records in 00:19:13.339 1+0 records out 00:19:13.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269039 s, 15.2 MB/s 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:13.339 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:13.599 /dev/nbd1 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:13.599 18:22:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:13.599 1+0 records in 00:19:13.599 1+0 records out 00:19:13.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242103 s, 16.9 MB/s 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:13.599 18:22:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:13.599 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:13.599 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:13.599 18:22:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:13.599 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:13.599 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:13.599 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.599 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:19:13.859 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:14.118 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:14.378 [2024-07-24 18:22:22.775357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:14.378 [2024-07-24 18:22:22.775391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.378 [2024-07-24 18:22:22.775405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1662fb0 00:19:14.378 [2024-07-24 18:22:22.775414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.378 [2024-07-24 18:22:22.776576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.378 [2024-07-24 18:22:22.776599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:14.378 [2024-07-24 18:22:22.776675] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:14.378 [2024-07-24 18:22:22.776694] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:19:14.378 [2024-07-24 18:22:22.776748] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.378 spare 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.378 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.378 [2024-07-24 18:22:22.877055] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1561070 00:19:14.378 [2024-07-24 18:22:22.877068] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:14.378 [2024-07-24 18:22:22.877198] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14c0e20 00:19:14.378 [2024-07-24 18:22:22.877306] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1561070 00:19:14.378 [2024-07-24 18:22:22.877313] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1561070 00:19:14.378 [2024-07-24 18:22:22.877386] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.638 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.638 "name": "raid_bdev1", 00:19:14.638 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:14.638 "strip_size_kb": 0, 00:19:14.638 "state": "online", 00:19:14.638 "raid_level": "raid1", 00:19:14.638 "superblock": true, 00:19:14.638 "num_base_bdevs": 2, 00:19:14.638 "num_base_bdevs_discovered": 2, 00:19:14.638 "num_base_bdevs_operational": 2, 00:19:14.638 "base_bdevs_list": [ 00:19:14.638 { 00:19:14.638 "name": "spare", 00:19:14.638 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:14.638 "is_configured": true, 00:19:14.638 "data_offset": 2048, 00:19:14.638 "data_size": 63488 00:19:14.638 }, 00:19:14.638 { 00:19:14.638 "name": "BaseBdev2", 00:19:14.638 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:14.638 "is_configured": true, 00:19:14.638 "data_offset": 2048, 00:19:14.638 "data_size": 63488 00:19:14.638 } 00:19:14.638 ] 00:19:14.638 }' 00:19:14.638 18:22:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.638 18:22:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.897 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:14.897 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:14.897 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:14.897 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:14.897 18:22:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:14.897 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.898 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.157 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:15.157 "name": "raid_bdev1", 00:19:15.157 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:15.157 "strip_size_kb": 0, 00:19:15.157 "state": "online", 00:19:15.157 "raid_level": "raid1", 00:19:15.157 "superblock": true, 00:19:15.157 "num_base_bdevs": 2, 00:19:15.157 "num_base_bdevs_discovered": 2, 00:19:15.157 "num_base_bdevs_operational": 2, 00:19:15.157 "base_bdevs_list": [ 00:19:15.157 { 00:19:15.157 "name": "spare", 00:19:15.157 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:15.157 "is_configured": true, 00:19:15.157 "data_offset": 2048, 00:19:15.157 "data_size": 63488 00:19:15.157 }, 00:19:15.157 { 00:19:15.157 "name": "BaseBdev2", 00:19:15.157 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:15.157 "is_configured": true, 00:19:15.157 "data_offset": 2048, 00:19:15.157 "data_size": 63488 00:19:15.157 } 00:19:15.157 ] 00:19:15.157 }' 00:19:15.157 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:15.157 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:15.157 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:15.157 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:15.157 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:15.157 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:15.416 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.416 18:22:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:15.674 [2024-07-24 18:22:24.030645] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.674 
18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.674 "name": "raid_bdev1", 00:19:15.674 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:15.674 "strip_size_kb": 0, 00:19:15.674 "state": "online", 00:19:15.674 "raid_level": "raid1", 00:19:15.674 "superblock": true, 00:19:15.674 "num_base_bdevs": 2, 00:19:15.674 "num_base_bdevs_discovered": 1, 00:19:15.674 "num_base_bdevs_operational": 1, 00:19:15.674 "base_bdevs_list": [ 00:19:15.674 { 00:19:15.674 "name": null, 00:19:15.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.674 "is_configured": false, 00:19:15.674 "data_offset": 2048, 00:19:15.674 "data_size": 63488 00:19:15.674 }, 00:19:15.674 { 00:19:15.674 "name": "BaseBdev2", 00:19:15.674 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:15.674 "is_configured": true, 00:19:15.674 "data_offset": 2048, 00:19:15.674 "data_size": 63488 00:19:15.674 } 00:19:15.674 ] 00:19:15.674 }' 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.674 18:22:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.241 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:16.500 [2024-07-24 18:22:24.848761] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.500 [2024-07-24 18:22:24.848870] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:16.500 [2024-07-24 18:22:24.848881] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:16.500 [2024-07-24 18:22:24.848901] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.500 [2024-07-24 18:22:24.853163] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11c6fb0 00:19:16.500 [2024-07-24 18:22:24.854775] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:16.500 18:22:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:19:17.437 18:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.437 18:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:17.437 18:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:17.437 18:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:17.437 18:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:17.437 18:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.437 18:22:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.696 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:17.696 "name": "raid_bdev1", 00:19:17.696 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:17.696 "strip_size_kb": 0, 00:19:17.696 "state": "online", 00:19:17.696 "raid_level": "raid1", 00:19:17.696 "superblock": true, 00:19:17.696 "num_base_bdevs": 2, 00:19:17.696 "num_base_bdevs_discovered": 2, 00:19:17.696 "num_base_bdevs_operational": 2, 00:19:17.696 "process": { 00:19:17.696 "type": "rebuild", 00:19:17.696 "target": "spare", 00:19:17.696 "progress": { 00:19:17.696 "blocks": 22528, 00:19:17.696 "percent": 35 
00:19:17.696 } 00:19:17.696 }, 00:19:17.696 "base_bdevs_list": [ 00:19:17.696 { 00:19:17.696 "name": "spare", 00:19:17.696 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:17.696 "is_configured": true, 00:19:17.696 "data_offset": 2048, 00:19:17.696 "data_size": 63488 00:19:17.696 }, 00:19:17.696 { 00:19:17.696 "name": "BaseBdev2", 00:19:17.696 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:17.696 "is_configured": true, 00:19:17.696 "data_offset": 2048, 00:19:17.696 "data_size": 63488 00:19:17.696 } 00:19:17.696 ] 00:19:17.696 }' 00:19:17.696 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:17.696 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.696 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:17.696 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.696 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:17.955 [2024-07-24 18:22:26.301758] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.955 [2024-07-24 18:22:26.365140] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:17.955 [2024-07-24 18:22:26.365170] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.955 [2024-07-24 18:22:26.365180] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.955 [2024-07-24 18:22:26.365185] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:17.955 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:17.955 18:22:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:17.955 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:17.955 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:17.955 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:17.955 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:17.956 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.956 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.956 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.956 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.956 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.956 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.215 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.215 "name": "raid_bdev1", 00:19:18.215 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:18.215 "strip_size_kb": 0, 00:19:18.215 "state": "online", 00:19:18.215 "raid_level": "raid1", 00:19:18.215 "superblock": true, 00:19:18.215 "num_base_bdevs": 2, 00:19:18.215 "num_base_bdevs_discovered": 1, 00:19:18.215 "num_base_bdevs_operational": 1, 00:19:18.215 "base_bdevs_list": [ 00:19:18.215 { 00:19:18.215 "name": null, 00:19:18.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.215 "is_configured": false, 00:19:18.215 "data_offset": 2048, 00:19:18.215 "data_size": 63488 00:19:18.215 }, 00:19:18.215 { 
00:19:18.215 "name": "BaseBdev2", 00:19:18.215 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:18.215 "is_configured": true, 00:19:18.215 "data_offset": 2048, 00:19:18.215 "data_size": 63488 00:19:18.215 } 00:19:18.215 ] 00:19:18.215 }' 00:19:18.215 18:22:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.215 18:22:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.781 18:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:18.781 [2024-07-24 18:22:27.223312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.781 [2024-07-24 18:22:27.223356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.781 [2024-07-24 18:22:27.223371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c0ca0 00:19:18.781 [2024-07-24 18:22:27.223380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.781 [2024-07-24 18:22:27.223667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.781 [2024-07-24 18:22:27.223681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.781 [2024-07-24 18:22:27.223740] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:18.781 [2024-07-24 18:22:27.223748] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:18.781 [2024-07-24 18:22:27.223755] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:18.781 [2024-07-24 18:22:27.223768] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.781 [2024-07-24 18:22:27.228068] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1664ff0 00:19:18.781 [2024-07-24 18:22:27.229106] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:18.781 spare 00:19:18.781 18:22:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:19:19.717 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.718 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:19.718 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:19.718 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:19.718 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:19.718 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.718 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.977 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:19.977 "name": "raid_bdev1", 00:19:19.977 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:19.977 "strip_size_kb": 0, 00:19:19.977 "state": "online", 00:19:19.977 "raid_level": "raid1", 00:19:19.977 "superblock": true, 00:19:19.977 "num_base_bdevs": 2, 00:19:19.977 "num_base_bdevs_discovered": 2, 00:19:19.977 "num_base_bdevs_operational": 2, 00:19:19.977 "process": { 00:19:19.977 "type": "rebuild", 00:19:19.977 "target": "spare", 00:19:19.977 "progress": { 00:19:19.977 "blocks": 22528, 00:19:19.977 
"percent": 35 00:19:19.977 } 00:19:19.977 }, 00:19:19.977 "base_bdevs_list": [ 00:19:19.977 { 00:19:19.977 "name": "spare", 00:19:19.977 "uuid": "755d55d6-b70c-5d7f-a65e-9bbc7feee490", 00:19:19.977 "is_configured": true, 00:19:19.977 "data_offset": 2048, 00:19:19.977 "data_size": 63488 00:19:19.977 }, 00:19:19.977 { 00:19:19.977 "name": "BaseBdev2", 00:19:19.977 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:19.977 "is_configured": true, 00:19:19.977 "data_offset": 2048, 00:19:19.977 "data_size": 63488 00:19:19.977 } 00:19:19.977 ] 00:19:19.977 }' 00:19:19.977 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:19.977 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.977 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:19.977 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.977 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:20.236 [2024-07-24 18:22:28.671683] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.236 [2024-07-24 18:22:28.739420] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:20.236 [2024-07-24 18:22:28.739454] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.236 [2024-07-24 18:22:28.739464] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.236 [2024-07-24 18:22:28.739474] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.236 
18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.236 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.495 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.495 "name": "raid_bdev1", 00:19:20.495 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:20.495 "strip_size_kb": 0, 00:19:20.495 "state": "online", 00:19:20.495 "raid_level": "raid1", 00:19:20.495 "superblock": true, 00:19:20.495 "num_base_bdevs": 2, 00:19:20.495 "num_base_bdevs_discovered": 1, 00:19:20.495 "num_base_bdevs_operational": 1, 00:19:20.495 "base_bdevs_list": [ 00:19:20.495 { 00:19:20.495 "name": null, 00:19:20.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.495 "is_configured": false, 00:19:20.495 "data_offset": 2048, 00:19:20.495 "data_size": 63488 00:19:20.495 }, 
00:19:20.495 { 00:19:20.495 "name": "BaseBdev2", 00:19:20.495 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:20.495 "is_configured": true, 00:19:20.495 "data_offset": 2048, 00:19:20.495 "data_size": 63488 00:19:20.495 } 00:19:20.495 ] 00:19:20.495 }' 00:19:20.495 18:22:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.495 18:22:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.063 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.063 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:21.063 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:21.063 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:21.063 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:21.063 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.063 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.063 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:21.063 "name": "raid_bdev1", 00:19:21.063 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:21.063 "strip_size_kb": 0, 00:19:21.063 "state": "online", 00:19:21.063 "raid_level": "raid1", 00:19:21.063 "superblock": true, 00:19:21.063 "num_base_bdevs": 2, 00:19:21.063 "num_base_bdevs_discovered": 1, 00:19:21.063 "num_base_bdevs_operational": 1, 00:19:21.063 "base_bdevs_list": [ 00:19:21.063 { 00:19:21.063 "name": null, 00:19:21.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.063 "is_configured": false, 00:19:21.063 "data_offset": 2048, 
00:19:21.063 "data_size": 63488 00:19:21.063 }, 00:19:21.063 { 00:19:21.063 "name": "BaseBdev2", 00:19:21.063 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:21.063 "is_configured": true, 00:19:21.063 "data_offset": 2048, 00:19:21.063 "data_size": 63488 00:19:21.063 } 00:19:21.063 ] 00:19:21.063 }' 00:19:21.063 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:21.323 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:21.323 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:21.323 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:21.323 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:21.323 18:22:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:21.582 [2024-07-24 18:22:30.030653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:21.582 [2024-07-24 18:22:30.030688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.582 [2024-07-24 18:22:30.030702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c4d90 00:19:21.582 [2024-07-24 18:22:30.030711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.582 [2024-07-24 18:22:30.030966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.582 [2024-07-24 18:22:30.030978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:21.582 [2024-07-24 18:22:30.031024] bdev_raid.c:3844:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev BaseBdev1 00:19:21.583 [2024-07-24 18:22:30.031032] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.583 [2024-07-24 18:22:30.031039] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:21.583 BaseBdev1 00:19:21.583 18:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:19:22.518 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.518 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:22.518 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:22.518 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:22.518 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:22.518 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:22.518 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.518 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.518 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.518 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.519 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.519 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.778 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:19:22.778 "name": "raid_bdev1", 00:19:22.778 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:22.778 "strip_size_kb": 0, 00:19:22.778 "state": "online", 00:19:22.778 "raid_level": "raid1", 00:19:22.778 "superblock": true, 00:19:22.778 "num_base_bdevs": 2, 00:19:22.778 "num_base_bdevs_discovered": 1, 00:19:22.778 "num_base_bdevs_operational": 1, 00:19:22.778 "base_bdevs_list": [ 00:19:22.778 { 00:19:22.778 "name": null, 00:19:22.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.778 "is_configured": false, 00:19:22.778 "data_offset": 2048, 00:19:22.778 "data_size": 63488 00:19:22.778 }, 00:19:22.778 { 00:19:22.778 "name": "BaseBdev2", 00:19:22.778 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:22.778 "is_configured": true, 00:19:22.778 "data_offset": 2048, 00:19:22.778 "data_size": 63488 00:19:22.778 } 00:19:22.778 ] 00:19:22.778 }' 00:19:22.778 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.778 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.390 18:22:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:23.390 "name": "raid_bdev1", 00:19:23.390 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:23.390 "strip_size_kb": 0, 00:19:23.390 "state": "online", 00:19:23.390 "raid_level": "raid1", 00:19:23.390 "superblock": true, 00:19:23.390 "num_base_bdevs": 2, 00:19:23.390 "num_base_bdevs_discovered": 1, 00:19:23.390 "num_base_bdevs_operational": 1, 00:19:23.390 "base_bdevs_list": [ 00:19:23.390 { 00:19:23.390 "name": null, 00:19:23.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.390 "is_configured": false, 00:19:23.390 "data_offset": 2048, 00:19:23.390 "data_size": 63488 00:19:23.390 }, 00:19:23.390 { 00:19:23.390 "name": "BaseBdev2", 00:19:23.390 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:23.390 "is_configured": true, 00:19:23.390 "data_offset": 2048, 00:19:23.390 "data_size": 63488 00:19:23.390 } 00:19:23.390 ] 00:19:23.390 }' 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.390 18:22:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:23.390 18:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.650 [2024-07-24 18:22:32.104009] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.650 [2024-07-24 18:22:32.104110] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.650 [2024-07-24 18:22:32.104121] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:23.650 request: 00:19:23.650 { 00:19:23.650 "base_bdev": "BaseBdev1", 00:19:23.650 "raid_bdev": "raid_bdev1", 00:19:23.650 "method": "bdev_raid_add_base_bdev", 00:19:23.650 
"req_id": 1 00:19:23.650 } 00:19:23.650 Got JSON-RPC error response 00:19:23.650 response: 00:19:23.650 { 00:19:23.650 "code": -22, 00:19:23.650 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:23.650 } 00:19:23.650 18:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:19:23.650 18:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:23.650 18:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:23.650 18:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:23.650 18:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.588 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.847 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.847 "name": "raid_bdev1", 00:19:24.847 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:24.847 "strip_size_kb": 0, 00:19:24.847 "state": "online", 00:19:24.847 "raid_level": "raid1", 00:19:24.847 "superblock": true, 00:19:24.847 "num_base_bdevs": 2, 00:19:24.847 "num_base_bdevs_discovered": 1, 00:19:24.847 "num_base_bdevs_operational": 1, 00:19:24.847 "base_bdevs_list": [ 00:19:24.847 { 00:19:24.847 "name": null, 00:19:24.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.847 "is_configured": false, 00:19:24.847 "data_offset": 2048, 00:19:24.847 "data_size": 63488 00:19:24.847 }, 00:19:24.847 { 00:19:24.847 "name": "BaseBdev2", 00:19:24.847 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:24.847 "is_configured": true, 00:19:24.847 "data_offset": 2048, 00:19:24.847 "data_size": 63488 00:19:24.847 } 00:19:24.847 ] 00:19:24.847 }' 00:19:24.847 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.847 18:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.415 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.415 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:25.416 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:25.416 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:25.416 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:25.416 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.416 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.416 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:25.416 "name": "raid_bdev1", 00:19:25.416 "uuid": "ec2b6af2-d386-4e1a-9da9-eccd7f65aad3", 00:19:25.416 "strip_size_kb": 0, 00:19:25.416 "state": "online", 00:19:25.416 "raid_level": "raid1", 00:19:25.416 "superblock": true, 00:19:25.416 "num_base_bdevs": 2, 00:19:25.416 "num_base_bdevs_discovered": 1, 00:19:25.416 "num_base_bdevs_operational": 1, 00:19:25.416 "base_bdevs_list": [ 00:19:25.416 { 00:19:25.416 "name": null, 00:19:25.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.416 "is_configured": false, 00:19:25.416 "data_offset": 2048, 00:19:25.416 "data_size": 63488 00:19:25.416 }, 00:19:25.416 { 00:19:25.416 "name": "BaseBdev2", 00:19:25.416 "uuid": "9c0b8347-5c8d-522f-968b-f865b7582990", 00:19:25.416 "is_configured": true, 00:19:25.416 "data_offset": 2048, 00:19:25.416 "data_size": 63488 00:19:25.416 } 00:19:25.416 ] 00:19:25.416 }' 00:19:25.416 18:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2260463 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2260463 ']' 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 2260463 00:19:25.675 18:22:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2260463 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2260463' 00:19:25.675 killing process with pid 2260463 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 2260463 00:19:25.675 Received shutdown signal, test time was about 60.000000 seconds 00:19:25.675 00:19:25.675 Latency(us) 00:19:25.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.675 =================================================================================================================== 00:19:25.675 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.675 [2024-07-24 18:22:34.111813] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.675 [2024-07-24 18:22:34.111881] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.675 [2024-07-24 18:22:34.111912] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.675 [2024-07-24 18:22:34.111919] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1561070 name raid_bdev1, state offline 00:19:25.675 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 2260463 00:19:25.675 [2024-07-24 18:22:34.136090] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.935 18:22:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:19:25.935 00:19:25.935 real 0m29.012s 00:19:25.935 user 0m40.894s 00:19:25.935 sys 0m5.389s 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.935 ************************************ 00:19:25.935 END TEST raid_rebuild_test_sb 00:19:25.935 ************************************ 00:19:25.935 18:22:34 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:19:25.935 18:22:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:25.935 18:22:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.935 18:22:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.935 ************************************ 00:19:25.935 START TEST raid_rebuild_test_io 00:19:25.935 ************************************ 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo 
BaseBdev1 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:19:25.935 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2265988 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2265988 /var/tmp/spdk-raid.sock 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 2265988 ']' 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:25.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.936 18:22:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.936 [2024-07-24 18:22:34.454457] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:19:25.936 [2024-07-24 18:22:34.454504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265988 ] 00:19:25.936 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:25.936 Zero copy mechanism will not be used. 
00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:01.0 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:01.1 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:01.2 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:01.3 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:01.4 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:01.5 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:01.6 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:01.7 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:02.0 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:02.1 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:02.2 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:02.3 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:02.4 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:02.5 cannot be used 00:19:25.936 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:02.6 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b3:02.7 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:01.0 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:01.1 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:01.2 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:01.3 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:01.4 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:01.5 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:01.6 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:01.7 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:02.0 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:02.1 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:02.2 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:02.3 cannot be used 00:19:25.936 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:02.4 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:02.5 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:02.6 cannot be used 00:19:25.936 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:25.936 EAL: Requested device 0000:b5:02.7 cannot be used 00:19:26.195 [2024-07-24 18:22:34.546695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.195 [2024-07-24 18:22:34.616211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.195 [2024-07-24 18:22:34.665305] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.195 [2024-07-24 18:22:34.665333] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.765 18:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.765 18:22:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:19:26.765 18:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:26.765 18:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:27.025 BaseBdev1_malloc 00:19:27.025 18:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:27.025 [2024-07-24 18:22:35.592796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:27.025 [2024-07-24 18:22:35.592836] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:19:27.025 [2024-07-24 18:22:35.592852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d9b370 00:19:27.025 [2024-07-24 18:22:35.592860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.025 [2024-07-24 18:22:35.594037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.025 [2024-07-24 18:22:35.594061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:27.025 BaseBdev1 00:19:27.025 18:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:27.025 18:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:27.285 BaseBdev2_malloc 00:19:27.285 18:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:27.544 [2024-07-24 18:22:35.925205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:27.544 [2024-07-24 18:22:35.925239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.544 [2024-07-24 18:22:35.925251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f3ee70 00:19:27.544 [2024-07-24 18:22:35.925259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.544 [2024-07-24 18:22:35.926220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.544 [2024-07-24 18:22:35.926242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:27.544 BaseBdev2 00:19:27.544 18:22:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:27.544 spare_malloc 00:19:27.544 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:27.804 spare_delay 00:19:27.804 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:28.063 [2024-07-24 18:22:36.433816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:28.063 [2024-07-24 18:22:36.433843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.063 [2024-07-24 18:22:36.433855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f3e4b0 00:19:28.063 [2024-07-24 18:22:36.433863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.063 [2024-07-24 18:22:36.434736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.063 [2024-07-24 18:22:36.434754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:28.063 spare 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:28.063 [2024-07-24 18:22:36.602265] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.063 [2024-07-24 18:22:36.602991] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:28.063 [2024-07-24 18:22:36.603036] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x1d93030 00:19:28.063 [2024-07-24 18:22:36.603042] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:28.063 [2024-07-24 18:22:36.603154] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f3f100 00:19:28.063 [2024-07-24 18:22:36.603240] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d93030 00:19:28.063 [2024-07-24 18:22:36.603249] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d93030 00:19:28.063 [2024-07-24 18:22:36.603313] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.063 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.322 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.322 "name": "raid_bdev1", 00:19:28.322 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:28.322 "strip_size_kb": 0, 00:19:28.322 "state": "online", 00:19:28.322 "raid_level": "raid1", 00:19:28.322 "superblock": false, 00:19:28.322 "num_base_bdevs": 2, 00:19:28.322 "num_base_bdevs_discovered": 2, 00:19:28.322 "num_base_bdevs_operational": 2, 00:19:28.322 "base_bdevs_list": [ 00:19:28.322 { 00:19:28.322 "name": "BaseBdev1", 00:19:28.322 "uuid": "514e60bc-6fcb-5e0f-b880-25cd6866e0d2", 00:19:28.322 "is_configured": true, 00:19:28.322 "data_offset": 0, 00:19:28.322 "data_size": 65536 00:19:28.322 }, 00:19:28.322 { 00:19:28.322 "name": "BaseBdev2", 00:19:28.322 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:28.322 "is_configured": true, 00:19:28.322 "data_offset": 0, 00:19:28.322 "data_size": 65536 00:19:28.322 } 00:19:28.322 ] 00:19:28.322 }' 00:19:28.322 18:22:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.322 18:22:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:28.889 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:28.889 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:19:28.889 [2024-07-24 18:22:37.460636] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.889 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:19:29.148 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:29.148 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:29.148 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:19:29.148 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:19:29.148 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:29.148 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:29.148 [2024-07-24 18:22:37.726992] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f33630 00:19:29.148 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:29.148 Zero copy mechanism will not be used. 00:19:29.148 Running I/O for 60 seconds... 
00:19:29.407 [2024-07-24 18:22:37.822456] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:29.407 [2024-07-24 18:22:37.822623] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1f33630 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.407 18:22:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.665 18:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.665 "name": "raid_bdev1", 00:19:29.665 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:29.665 "strip_size_kb": 0, 00:19:29.665 "state": "online", 00:19:29.665 "raid_level": "raid1", 00:19:29.665 "superblock": false, 
00:19:29.665 "num_base_bdevs": 2, 00:19:29.665 "num_base_bdevs_discovered": 1, 00:19:29.665 "num_base_bdevs_operational": 1, 00:19:29.665 "base_bdevs_list": [ 00:19:29.665 { 00:19:29.665 "name": null, 00:19:29.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.665 "is_configured": false, 00:19:29.665 "data_offset": 0, 00:19:29.665 "data_size": 65536 00:19:29.665 }, 00:19:29.665 { 00:19:29.665 "name": "BaseBdev2", 00:19:29.665 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:29.665 "is_configured": true, 00:19:29.665 "data_offset": 0, 00:19:29.665 "data_size": 65536 00:19:29.665 } 00:19:29.665 ] 00:19:29.665 }' 00:19:29.665 18:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.665 18:22:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.923 18:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:30.181 [2024-07-24 18:22:38.661794] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:30.181 18:22:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:19:30.181 [2024-07-24 18:22:38.711257] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f326c0 00:19:30.181 [2024-07-24 18:22:38.712953] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:30.440 [2024-07-24 18:22:38.835874] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:30.440 [2024-07-24 18:22:38.836215] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:30.698 [2024-07-24 18:22:39.048808] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:19:30.698 [2024-07-24 18:22:39.048904] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:30.956 [2024-07-24 18:22:39.391585] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:30.956 [2024-07-24 18:22:39.514064] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:30.956 [2024-07-24 18:22:39.514225] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:31.214 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.214 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:31.214 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:31.214 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:31.214 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:31.214 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.214 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.473 [2024-07-24 18:22:39.828509] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:31.473 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:31.473 "name": "raid_bdev1", 00:19:31.473 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:31.473 "strip_size_kb": 0, 00:19:31.473 "state": "online", 00:19:31.473 "raid_level": "raid1", 
00:19:31.473 "superblock": false, 00:19:31.473 "num_base_bdevs": 2, 00:19:31.473 "num_base_bdevs_discovered": 2, 00:19:31.473 "num_base_bdevs_operational": 2, 00:19:31.473 "process": { 00:19:31.473 "type": "rebuild", 00:19:31.473 "target": "spare", 00:19:31.473 "progress": { 00:19:31.473 "blocks": 14336, 00:19:31.473 "percent": 21 00:19:31.473 } 00:19:31.473 }, 00:19:31.473 "base_bdevs_list": [ 00:19:31.473 { 00:19:31.473 "name": "spare", 00:19:31.473 "uuid": "60e0c808-b0bb-54d6-9902-c46782b861c7", 00:19:31.473 "is_configured": true, 00:19:31.473 "data_offset": 0, 00:19:31.473 "data_size": 65536 00:19:31.473 }, 00:19:31.473 { 00:19:31.473 "name": "BaseBdev2", 00:19:31.473 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:31.473 "is_configured": true, 00:19:31.473 "data_offset": 0, 00:19:31.473 "data_size": 65536 00:19:31.473 } 00:19:31.473 ] 00:19:31.473 }' 00:19:31.473 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:31.473 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.473 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:31.473 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.473 18:22:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:31.473 [2024-07-24 18:22:40.047407] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:31.473 [2024-07-24 18:22:40.047638] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:31.732 [2024-07-24 18:22:40.124277] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.732 [2024-07-24 
18:22:40.160776] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:31.732 [2024-07-24 18:22:40.272534] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:31.732 [2024-07-24 18:22:40.273837] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.732 [2024-07-24 18:22:40.273857] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.732 [2024-07-24 18:22:40.273864] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:31.732 [2024-07-24 18:22:40.289246] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1f33630 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.732 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.990 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.990 "name": "raid_bdev1", 00:19:31.990 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:31.990 "strip_size_kb": 0, 00:19:31.990 "state": "online", 00:19:31.990 "raid_level": "raid1", 00:19:31.990 "superblock": false, 00:19:31.990 "num_base_bdevs": 2, 00:19:31.990 "num_base_bdevs_discovered": 1, 00:19:31.990 "num_base_bdevs_operational": 1, 00:19:31.990 "base_bdevs_list": [ 00:19:31.990 { 00:19:31.990 "name": null, 00:19:31.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.990 "is_configured": false, 00:19:31.990 "data_offset": 0, 00:19:31.990 "data_size": 65536 00:19:31.990 }, 00:19:31.990 { 00:19:31.990 "name": "BaseBdev2", 00:19:31.990 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:31.990 "is_configured": true, 00:19:31.990 "data_offset": 0, 00:19:31.990 "data_size": 65536 00:19:31.990 } 00:19:31.990 ] 00:19:31.990 }' 00:19:31.990 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.990 18:22:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:32.555 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.555 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:32.555 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:32.556 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:32.556 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:32.556 18:22:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.556 18:22:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.556 18:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.556 "name": "raid_bdev1", 00:19:32.556 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:32.556 "strip_size_kb": 0, 00:19:32.556 "state": "online", 00:19:32.556 "raid_level": "raid1", 00:19:32.556 "superblock": false, 00:19:32.556 "num_base_bdevs": 2, 00:19:32.556 "num_base_bdevs_discovered": 1, 00:19:32.556 "num_base_bdevs_operational": 1, 00:19:32.556 "base_bdevs_list": [ 00:19:32.556 { 00:19:32.556 "name": null, 00:19:32.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.556 "is_configured": false, 00:19:32.556 "data_offset": 0, 00:19:32.556 "data_size": 65536 00:19:32.556 }, 00:19:32.556 { 00:19:32.556 "name": "BaseBdev2", 00:19:32.556 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:32.556 "is_configured": true, 00:19:32.556 "data_offset": 0, 00:19:32.556 "data_size": 65536 00:19:32.556 } 00:19:32.556 ] 00:19:32.556 }' 00:19:32.556 18:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:32.814 18:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:32.814 18:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:32.814 18:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:32.814 18:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:32.814 [2024-07-24 18:22:41.389686] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:33.072 [2024-07-24 18:22:41.418551] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f326c0 00:19:33.072 [2024-07-24 18:22:41.419647] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:33.072 18:22:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:33.072 [2024-07-24 18:22:41.532374] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:33.072 [2024-07-24 18:22:41.532699] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:33.329 [2024-07-24 18:22:41.735328] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:33.329 [2024-07-24 18:22:41.735426] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:33.587 [2024-07-24 18:22:42.049223] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:33.587 [2024-07-24 18:22:42.049451] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:33.846 [2024-07-24 18:22:42.251232] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:33.846 [2024-07-24 18:22:42.251373] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:33.846 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:33.846 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:33.846 18:22:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:33.846 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:33.846 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:33.846 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.846 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.104 [2024-07-24 18:22:42.588422] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:34.104 "name": "raid_bdev1", 00:19:34.104 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:34.104 "strip_size_kb": 0, 00:19:34.104 "state": "online", 00:19:34.104 "raid_level": "raid1", 00:19:34.104 "superblock": false, 00:19:34.104 "num_base_bdevs": 2, 00:19:34.104 "num_base_bdevs_discovered": 2, 00:19:34.104 "num_base_bdevs_operational": 2, 00:19:34.104 "process": { 00:19:34.104 "type": "rebuild", 00:19:34.104 "target": "spare", 00:19:34.104 "progress": { 00:19:34.104 "blocks": 12288, 00:19:34.104 "percent": 18 00:19:34.104 } 00:19:34.104 }, 00:19:34.104 "base_bdevs_list": [ 00:19:34.104 { 00:19:34.104 "name": "spare", 00:19:34.104 "uuid": "60e0c808-b0bb-54d6-9902-c46782b861c7", 00:19:34.104 "is_configured": true, 00:19:34.104 "data_offset": 0, 00:19:34.104 "data_size": 65536 00:19:34.104 }, 00:19:34.104 { 00:19:34.104 "name": "BaseBdev2", 00:19:34.104 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:34.104 "is_configured": true, 00:19:34.104 "data_offset": 0, 00:19:34.104 "data_size": 65536 00:19:34.104 } 00:19:34.104 ] 00:19:34.104 }' 00:19:34.104 18:22:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=626 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.104 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.363 [2024-07-24 18:22:42.795904] 
bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:34.363 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:34.363 "name": "raid_bdev1", 00:19:34.363 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:34.363 "strip_size_kb": 0, 00:19:34.363 "state": "online", 00:19:34.363 "raid_level": "raid1", 00:19:34.363 "superblock": false, 00:19:34.363 "num_base_bdevs": 2, 00:19:34.363 "num_base_bdevs_discovered": 2, 00:19:34.363 "num_base_bdevs_operational": 2, 00:19:34.363 "process": { 00:19:34.363 "type": "rebuild", 00:19:34.363 "target": "spare", 00:19:34.363 "progress": { 00:19:34.363 "blocks": 16384, 00:19:34.363 "percent": 25 00:19:34.363 } 00:19:34.363 }, 00:19:34.363 "base_bdevs_list": [ 00:19:34.363 { 00:19:34.363 "name": "spare", 00:19:34.363 "uuid": "60e0c808-b0bb-54d6-9902-c46782b861c7", 00:19:34.363 "is_configured": true, 00:19:34.363 "data_offset": 0, 00:19:34.363 "data_size": 65536 00:19:34.363 }, 00:19:34.363 { 00:19:34.363 "name": "BaseBdev2", 00:19:34.363 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:34.363 "is_configured": true, 00:19:34.363 "data_offset": 0, 00:19:34.363 "data_size": 65536 00:19:34.363 } 00:19:34.363 ] 00:19:34.363 }' 00:19:34.363 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:34.363 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.363 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:34.363 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.363 18:22:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:34.621 [2024-07-24 18:22:43.110349] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 
24576 00:19:35.188 [2024-07-24 18:22:43.553479] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:35.446 [2024-07-24 18:22:43.872201] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:35.446 [2024-07-24 18:22:43.872536] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:35.446 18:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:35.446 18:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.446 18:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:35.446 18:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:35.446 18:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:35.446 18:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:35.446 18:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.446 18:22:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.704 18:22:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.704 "name": "raid_bdev1", 00:19:35.704 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:35.704 "strip_size_kb": 0, 00:19:35.704 "state": "online", 00:19:35.704 "raid_level": "raid1", 00:19:35.704 "superblock": false, 00:19:35.704 "num_base_bdevs": 2, 00:19:35.704 "num_base_bdevs_discovered": 2, 00:19:35.704 "num_base_bdevs_operational": 2, 00:19:35.704 "process": { 00:19:35.704 "type": 
"rebuild", 00:19:35.704 "target": "spare", 00:19:35.704 "progress": { 00:19:35.704 "blocks": 34816, 00:19:35.704 "percent": 53 00:19:35.704 } 00:19:35.704 }, 00:19:35.704 "base_bdevs_list": [ 00:19:35.704 { 00:19:35.704 "name": "spare", 00:19:35.704 "uuid": "60e0c808-b0bb-54d6-9902-c46782b861c7", 00:19:35.704 "is_configured": true, 00:19:35.704 "data_offset": 0, 00:19:35.704 "data_size": 65536 00:19:35.704 }, 00:19:35.704 { 00:19:35.704 "name": "BaseBdev2", 00:19:35.704 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:35.704 "is_configured": true, 00:19:35.704 "data_offset": 0, 00:19:35.704 "data_size": 65536 00:19:35.704 } 00:19:35.704 ] 00:19:35.704 }' 00:19:35.704 18:22:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:35.704 18:22:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.704 18:22:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:35.704 18:22:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.704 18:22:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:36.639 [2024-07-24 18:22:44.962319] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:36.639 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:36.639 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.639 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:36.639 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:36.639 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:36.639 18:22:45 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:36.899 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.899 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.899 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:36.899 "name": "raid_bdev1", 00:19:36.899 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:36.899 "strip_size_kb": 0, 00:19:36.899 "state": "online", 00:19:36.899 "raid_level": "raid1", 00:19:36.899 "superblock": false, 00:19:36.899 "num_base_bdevs": 2, 00:19:36.899 "num_base_bdevs_discovered": 2, 00:19:36.899 "num_base_bdevs_operational": 2, 00:19:36.899 "process": { 00:19:36.899 "type": "rebuild", 00:19:36.899 "target": "spare", 00:19:36.899 "progress": { 00:19:36.899 "blocks": 57344, 00:19:36.899 "percent": 87 00:19:36.899 } 00:19:36.899 }, 00:19:36.899 "base_bdevs_list": [ 00:19:36.899 { 00:19:36.899 "name": "spare", 00:19:36.899 "uuid": "60e0c808-b0bb-54d6-9902-c46782b861c7", 00:19:36.899 "is_configured": true, 00:19:36.899 "data_offset": 0, 00:19:36.899 "data_size": 65536 00:19:36.899 }, 00:19:36.899 { 00:19:36.899 "name": "BaseBdev2", 00:19:36.899 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:36.899 "is_configured": true, 00:19:36.899 "data_offset": 0, 00:19:36.899 "data_size": 65536 00:19:36.899 } 00:19:36.899 ] 00:19:36.899 }' 00:19:36.899 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:36.899 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.899 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:36.899 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e 
]] 00:19:36.899 18:22:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:37.551 [2024-07-24 18:22:45.802836] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:37.551 [2024-07-24 18:22:45.903070] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:37.551 [2024-07-24 18:22:45.903984] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:38.120 "name": "raid_bdev1", 00:19:38.120 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:38.120 "strip_size_kb": 0, 00:19:38.120 "state": "online", 00:19:38.120 "raid_level": "raid1", 00:19:38.120 "superblock": false, 00:19:38.120 "num_base_bdevs": 2, 00:19:38.120 "num_base_bdevs_discovered": 2, 00:19:38.120 "num_base_bdevs_operational": 2, 00:19:38.120 "base_bdevs_list": [ 00:19:38.120 { 00:19:38.120 "name": 
"spare", 00:19:38.120 "uuid": "60e0c808-b0bb-54d6-9902-c46782b861c7", 00:19:38.120 "is_configured": true, 00:19:38.120 "data_offset": 0, 00:19:38.120 "data_size": 65536 00:19:38.120 }, 00:19:38.120 { 00:19:38.120 "name": "BaseBdev2", 00:19:38.120 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:38.120 "is_configured": true, 00:19:38.120 "data_offset": 0, 00:19:38.120 "data_size": 65536 00:19:38.120 } 00:19:38.120 ] 00:19:38.120 }' 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:38.120 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:38.378 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:38.379 "name": "raid_bdev1", 
00:19:38.379 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:38.379 "strip_size_kb": 0, 00:19:38.379 "state": "online", 00:19:38.379 "raid_level": "raid1", 00:19:38.379 "superblock": false, 00:19:38.379 "num_base_bdevs": 2, 00:19:38.379 "num_base_bdevs_discovered": 2, 00:19:38.379 "num_base_bdevs_operational": 2, 00:19:38.379 "base_bdevs_list": [ 00:19:38.379 { 00:19:38.379 "name": "spare", 00:19:38.379 "uuid": "60e0c808-b0bb-54d6-9902-c46782b861c7", 00:19:38.379 "is_configured": true, 00:19:38.379 "data_offset": 0, 00:19:38.379 "data_size": 65536 00:19:38.379 }, 00:19:38.379 { 00:19:38.379 "name": "BaseBdev2", 00:19:38.379 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:38.379 "is_configured": true, 00:19:38.379 "data_offset": 0, 00:19:38.379 "data_size": 65536 00:19:38.379 } 00:19:38.379 ] 00:19:38.379 }' 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:38.379 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:38.639 18:22:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.639 18:22:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.639 18:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.639 "name": "raid_bdev1", 00:19:38.639 "uuid": "1b116051-0601-48fe-8638-79edd76c80e2", 00:19:38.639 "strip_size_kb": 0, 00:19:38.639 "state": "online", 00:19:38.639 "raid_level": "raid1", 00:19:38.639 "superblock": false, 00:19:38.639 "num_base_bdevs": 2, 00:19:38.639 "num_base_bdevs_discovered": 2, 00:19:38.639 "num_base_bdevs_operational": 2, 00:19:38.639 "base_bdevs_list": [ 00:19:38.639 { 00:19:38.639 "name": "spare", 00:19:38.639 "uuid": "60e0c808-b0bb-54d6-9902-c46782b861c7", 00:19:38.639 "is_configured": true, 00:19:38.639 "data_offset": 0, 00:19:38.639 "data_size": 65536 00:19:38.639 }, 00:19:38.639 { 00:19:38.639 "name": "BaseBdev2", 00:19:38.639 "uuid": "af04bae3-2221-5558-98f3-26ae506b5b0f", 00:19:38.639 "is_configured": true, 00:19:38.639 "data_offset": 0, 00:19:38.639 "data_size": 65536 00:19:38.639 } 00:19:38.639 ] 00:19:38.639 }' 00:19:38.639 18:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.639 18:22:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:39.208 18:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:39.208 [2024-07-24 18:22:47.759693] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.208 [2024-07-24 18:22:47.759718] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.468 00:19:39.468 Latency(us) 00:19:39.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.468 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:39.468 raid_bdev1 : 10.07 117.23 351.68 0.00 0.00 11723.03 239.21 110729.63 00:19:39.468 =================================================================================================================== 00:19:39.468 Total : 117.23 351.68 0.00 0.00 11723.03 239.21 110729.63 00:19:39.468 [2024-07-24 18:22:47.830677] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.468 [2024-07-24 18:22:47.830698] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.468 [2024-07-24 18:22:47.830752] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.468 [2024-07-24 18:22:47.830761] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d93030 name raid_bdev1, state offline 00:19:39.468 0 00:19:39.468 18:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:19:39.468 18:22:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' 
true = true ']' 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:39.468 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:39.728 /dev/nbd0 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # break 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.728 1+0 records in 00:19:39.728 1+0 records out 00:19:39.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255603 s, 16.0 MB/s 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev2') 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:39.728 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:39.988 /dev/nbd1 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.988 1+0 records in 00:19:39.988 1+0 records out 00:19:39.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024954 s, 16.4 MB/s 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:39.988 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.248 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd0 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2265988 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 2265988 ']' 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 2265988 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2265988 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2265988' 00:19:40.507 killing process with pid 2265988 00:19:40.507 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 2265988 00:19:40.507 Received shutdown signal, test time was about 11.173198 seconds 00:19:40.507 00:19:40.507 Latency(us) 00:19:40.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.507 
=================================================================================================================== 00:19:40.508 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.508 [2024-07-24 18:22:48.929091] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.508 18:22:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 2265988 00:19:40.508 [2024-07-24 18:22:48.947803] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:19:40.767 00:19:40.767 real 0m14.733s 00:19:40.767 user 0m21.464s 00:19:40.767 sys 0m2.289s 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.767 ************************************ 00:19:40.767 END TEST raid_rebuild_test_io 00:19:40.767 ************************************ 00:19:40.767 18:22:49 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:19:40.767 18:22:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:40.767 18:22:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:40.767 18:22:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.767 ************************************ 00:19:40.767 START TEST raid_rebuild_test_sb_io 00:19:40.767 ************************************ 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local 
superblock=true 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@588 -- # strip_size=0 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2268679 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2268679 /var/tmp/spdk-raid.sock 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 2268679 ']' 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:40.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.767 18:22:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.767 [2024-07-24 18:22:49.271069] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:19:40.767 [2024-07-24 18:22:49.271118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268679 ] 00:19:40.767 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:40.767 Zero copy mechanism will not be used. 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:01.0 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:01.1 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:01.2 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:01.3 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:01.4 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:01.5 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:01.6 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:01.7 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:02.0 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:02.1 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:02.2 cannot be used 00:19:40.767 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:02.3 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:02.4 cannot be used 00:19:40.767 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.767 EAL: Requested device 0000:b3:02.5 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b3:02.6 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b3:02.7 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:01.0 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:01.1 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:01.2 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:01.3 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:01.4 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:01.5 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:01.6 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:01.7 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:02.0 cannot be used 00:19:40.768 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:02.1 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:02.2 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:02.3 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:02.4 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:02.5 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:02.6 cannot be used 00:19:40.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:40.768 EAL: Requested device 0000:b5:02.7 cannot be used 00:19:41.027 [2024-07-24 18:22:49.365885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.027 [2024-07-24 18:22:49.435854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.027 [2024-07-24 18:22:49.495216] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.027 [2024-07-24 18:22:49.495248] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.596 18:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.596 18:22:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:19:41.596 18:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:41.596 18:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:41.855 BaseBdev1_malloc 
00:19:41.855 18:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:41.855 [2024-07-24 18:22:50.399435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:41.855 [2024-07-24 18:22:50.399477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.855 [2024-07-24 18:22:50.399492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe88370 00:19:41.855 [2024-07-24 18:22:50.399500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.855 [2024-07-24 18:22:50.400571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.855 [2024-07-24 18:22:50.400595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:41.855 BaseBdev1 00:19:41.855 18:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:41.855 18:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:42.114 BaseBdev2_malloc 00:19:42.114 18:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:42.373 [2024-07-24 18:22:50.748019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:42.373 [2024-07-24 18:22:50.748056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.373 [2024-07-24 18:22:50.748069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x102be70 00:19:42.373 [2024-07-24 
18:22:50.748078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.373 [2024-07-24 18:22:50.749089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.373 [2024-07-24 18:22:50.749112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:42.373 BaseBdev2 00:19:42.373 18:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:42.373 spare_malloc 00:19:42.373 18:22:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:42.632 spare_delay 00:19:42.632 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:42.892 [2024-07-24 18:22:51.284936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:42.892 [2024-07-24 18:22:51.284967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.892 [2024-07-24 18:22:51.284978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x102b4b0 00:19:42.892 [2024-07-24 18:22:51.284986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.892 [2024-07-24 18:22:51.285883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.892 [2024-07-24 18:22:51.285903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:42.892 spare 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:42.892 [2024-07-24 18:22:51.453385] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:42.892 [2024-07-24 18:22:51.454133] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:42.892 [2024-07-24 18:22:51.454241] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe80030 00:19:42.892 [2024-07-24 18:22:51.454249] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:42.892 [2024-07-24 18:22:51.454360] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x102c100 00:19:42.892 [2024-07-24 18:22:51.454452] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe80030 00:19:42.892 [2024-07-24 18:22:51.454458] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe80030 00:19:42.892 [2024-07-24 18:22:51.454517] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.892 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.152 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.152 "name": "raid_bdev1", 00:19:43.152 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:43.152 "strip_size_kb": 0, 00:19:43.152 "state": "online", 00:19:43.152 "raid_level": "raid1", 00:19:43.152 "superblock": true, 00:19:43.152 "num_base_bdevs": 2, 00:19:43.152 "num_base_bdevs_discovered": 2, 00:19:43.152 "num_base_bdevs_operational": 2, 00:19:43.152 "base_bdevs_list": [ 00:19:43.152 { 00:19:43.152 "name": "BaseBdev1", 00:19:43.152 "uuid": "2db96b08-1759-5d2d-8984-a1c7bc97dbfb", 00:19:43.152 "is_configured": true, 00:19:43.152 "data_offset": 2048, 00:19:43.152 "data_size": 63488 00:19:43.152 }, 00:19:43.152 { 00:19:43.152 "name": "BaseBdev2", 00:19:43.152 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:43.152 "is_configured": true, 00:19:43.152 "data_offset": 2048, 00:19:43.152 "data_size": 63488 00:19:43.152 } 00:19:43.152 ] 00:19:43.152 }' 00:19:43.152 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.152 18:22:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:43.720 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:19:43.720 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:43.720 [2024-07-24 18:22:52.299719] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.979 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:19:43.979 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.979 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:43.979 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:19:43.979 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:19:43.979 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:43.979 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:43.979 [2024-07-24 18:22:52.566103] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1021360 00:19:43.979 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:43.979 Zero copy mechanism will not be used. 00:19:43.979 Running I/O for 60 seconds... 
00:19:44.238 [2024-07-24 18:22:52.664002] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:44.238 [2024-07-24 18:22:52.674051] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1021360 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.238 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.496 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.496 "name": "raid_bdev1", 00:19:44.497 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:44.497 "strip_size_kb": 0, 00:19:44.497 "state": "online", 00:19:44.497 "raid_level": 
"raid1", 00:19:44.497 "superblock": true, 00:19:44.497 "num_base_bdevs": 2, 00:19:44.497 "num_base_bdevs_discovered": 1, 00:19:44.497 "num_base_bdevs_operational": 1, 00:19:44.497 "base_bdevs_list": [ 00:19:44.497 { 00:19:44.497 "name": null, 00:19:44.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.497 "is_configured": false, 00:19:44.497 "data_offset": 2048, 00:19:44.497 "data_size": 63488 00:19:44.497 }, 00:19:44.497 { 00:19:44.497 "name": "BaseBdev2", 00:19:44.497 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:44.497 "is_configured": true, 00:19:44.497 "data_offset": 2048, 00:19:44.497 "data_size": 63488 00:19:44.497 } 00:19:44.497 ] 00:19:44.497 }' 00:19:44.497 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.497 18:22:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:45.064 18:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:45.064 [2024-07-24 18:22:53.525182] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:45.064 [2024-07-24 18:22:53.558549] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x101f7a0 00:19:45.064 [2024-07-24 18:22:53.560210] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:45.064 18:22:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:19:45.323 [2024-07-24 18:22:53.667443] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:45.323 [2024-07-24 18:22:53.667822] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:45.323 [2024-07-24 18:22:53.890956] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:45.323 [2024-07-24 18:22:53.891078] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:45.583 [2024-07-24 18:22:54.142363] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:45.842 [2024-07-24 18:22:54.267012] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:46.101 [2024-07-24 18:22:54.481791] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:46.101 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.101 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:46.101 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:46.101 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:46.101 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:46.101 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.101 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.101 [2024-07-24 18:22:54.588008] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:46.101 [2024-07-24 18:22:54.588136] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:46.360 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:46.360 "name": "raid_bdev1", 00:19:46.360 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:46.360 "strip_size_kb": 0, 00:19:46.360 "state": "online", 00:19:46.360 "raid_level": "raid1", 00:19:46.360 "superblock": true, 00:19:46.360 "num_base_bdevs": 2, 00:19:46.360 "num_base_bdevs_discovered": 2, 00:19:46.360 "num_base_bdevs_operational": 2, 00:19:46.360 "process": { 00:19:46.360 "type": "rebuild", 00:19:46.360 "target": "spare", 00:19:46.360 "progress": { 00:19:46.360 "blocks": 18432, 00:19:46.360 "percent": 29 00:19:46.360 } 00:19:46.360 }, 00:19:46.360 "base_bdevs_list": [ 00:19:46.360 { 00:19:46.360 "name": "spare", 00:19:46.360 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:46.360 "is_configured": true, 00:19:46.360 "data_offset": 2048, 00:19:46.360 "data_size": 63488 00:19:46.360 }, 00:19:46.360 { 00:19:46.360 "name": "BaseBdev2", 00:19:46.360 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:46.360 "is_configured": true, 00:19:46.360 "data_offset": 2048, 00:19:46.360 "data_size": 63488 00:19:46.360 } 00:19:46.360 ] 00:19:46.360 }' 00:19:46.360 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:46.360 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.360 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:46.360 [2024-07-24 18:22:54.824341] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:46.360 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.360 18:22:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:46.360 [2024-07-24 
18:22:54.929803] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:46.620 [2024-07-24 18:22:54.986772] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:46.620 [2024-07-24 18:22:55.172906] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:46.620 [2024-07-24 18:22:55.179465] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.620 [2024-07-24 18:22:55.179485] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:46.620 [2024-07-24 18:22:55.179492] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:46.620 [2024-07-24 18:22:55.189719] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1021360 00:19:46.620 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:46.620 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:46.620 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:46.620 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:46.620 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:46.620 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:46.620 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.620 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.879 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.879 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.879 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.879 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.879 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.879 "name": "raid_bdev1", 00:19:46.879 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:46.879 "strip_size_kb": 0, 00:19:46.879 "state": "online", 00:19:46.879 "raid_level": "raid1", 00:19:46.879 "superblock": true, 00:19:46.879 "num_base_bdevs": 2, 00:19:46.879 "num_base_bdevs_discovered": 1, 00:19:46.879 "num_base_bdevs_operational": 1, 00:19:46.879 "base_bdevs_list": [ 00:19:46.879 { 00:19:46.879 "name": null, 00:19:46.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.879 "is_configured": false, 00:19:46.879 "data_offset": 2048, 00:19:46.879 "data_size": 63488 00:19:46.879 }, 00:19:46.879 { 00:19:46.879 "name": "BaseBdev2", 00:19:46.879 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:46.879 "is_configured": true, 00:19:46.879 "data_offset": 2048, 00:19:46.879 "data_size": 63488 00:19:46.879 } 00:19:46.879 ] 00:19:46.879 }' 00:19:46.879 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.879 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:47.447 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.447 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:47.447 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:47.447 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local 
target=none 00:19:47.447 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:47.447 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.447 18:22:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.706 18:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:47.706 "name": "raid_bdev1", 00:19:47.706 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:47.706 "strip_size_kb": 0, 00:19:47.707 "state": "online", 00:19:47.707 "raid_level": "raid1", 00:19:47.707 "superblock": true, 00:19:47.707 "num_base_bdevs": 2, 00:19:47.707 "num_base_bdevs_discovered": 1, 00:19:47.707 "num_base_bdevs_operational": 1, 00:19:47.707 "base_bdevs_list": [ 00:19:47.707 { 00:19:47.707 "name": null, 00:19:47.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.707 "is_configured": false, 00:19:47.707 "data_offset": 2048, 00:19:47.707 "data_size": 63488 00:19:47.707 }, 00:19:47.707 { 00:19:47.707 "name": "BaseBdev2", 00:19:47.707 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:47.707 "is_configured": true, 00:19:47.707 "data_offset": 2048, 00:19:47.707 "data_size": 63488 00:19:47.707 } 00:19:47.707 ] 00:19:47.707 }' 00:19:47.707 18:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:47.707 18:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:47.707 18:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:47.707 18:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:47.707 18:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:47.707 [2024-07-24 18:22:56.274599] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.707 [2024-07-24 18:22:56.298262] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1022f20 00:19:47.707 [2024-07-24 18:22:56.299349] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.966 18:22:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:47.966 [2024-07-24 18:22:56.416807] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:47.966 [2024-07-24 18:22:56.417195] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:48.225 [2024-07-24 18:22:56.629770] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:48.225 [2024-07-24 18:22:56.629908] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:48.794 [2024-07-24 18:22:57.088324] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:48.794 [2024-07-24 18:22:57.088472] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:48.794 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.794 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:48.794 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:48.794 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:19:48.794 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:48.794 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.794 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.053 [2024-07-24 18:22:57.425395] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:49.053 [2024-07-24 18:22:57.425811] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:49.053 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:49.053 "name": "raid_bdev1", 00:19:49.053 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:49.053 "strip_size_kb": 0, 00:19:49.053 "state": "online", 00:19:49.053 "raid_level": "raid1", 00:19:49.053 "superblock": true, 00:19:49.053 "num_base_bdevs": 2, 00:19:49.053 "num_base_bdevs_discovered": 2, 00:19:49.053 "num_base_bdevs_operational": 2, 00:19:49.053 "process": { 00:19:49.053 "type": "rebuild", 00:19:49.053 "target": "spare", 00:19:49.053 "progress": { 00:19:49.053 "blocks": 14336, 00:19:49.053 "percent": 22 00:19:49.053 } 00:19:49.053 }, 00:19:49.053 "base_bdevs_list": [ 00:19:49.053 { 00:19:49.053 "name": "spare", 00:19:49.053 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:49.053 "is_configured": true, 00:19:49.053 "data_offset": 2048, 00:19:49.053 "data_size": 63488 00:19:49.053 }, 00:19:49.053 { 00:19:49.053 "name": "BaseBdev2", 00:19:49.053 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:49.053 "is_configured": true, 00:19:49.053 "data_offset": 2048, 00:19:49.053 "data_size": 63488 00:19:49.053 } 00:19:49.053 ] 00:19:49.053 }' 
00:19:49.053 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:49.053 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.053 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:49.053 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.053 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:19:49.053 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:19:49.053 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=641 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.054 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.312 [2024-07-24 18:22:57.650954] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:49.312 [2024-07-24 18:22:57.651158] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:49.312 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:49.312 "name": "raid_bdev1", 00:19:49.312 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:49.312 "strip_size_kb": 0, 00:19:49.312 "state": "online", 00:19:49.312 "raid_level": "raid1", 00:19:49.312 "superblock": true, 00:19:49.312 "num_base_bdevs": 2, 00:19:49.312 "num_base_bdevs_discovered": 2, 00:19:49.312 "num_base_bdevs_operational": 2, 00:19:49.312 "process": { 00:19:49.312 "type": "rebuild", 00:19:49.312 "target": "spare", 00:19:49.312 "progress": { 00:19:49.312 "blocks": 16384, 00:19:49.312 "percent": 25 00:19:49.312 } 00:19:49.312 }, 00:19:49.312 "base_bdevs_list": [ 00:19:49.312 { 00:19:49.312 "name": "spare", 00:19:49.312 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:49.313 "is_configured": true, 00:19:49.313 "data_offset": 2048, 00:19:49.313 "data_size": 63488 00:19:49.313 }, 00:19:49.313 { 00:19:49.313 "name": "BaseBdev2", 00:19:49.313 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:49.313 "is_configured": true, 00:19:49.313 "data_offset": 2048, 00:19:49.313 "data_size": 63488 00:19:49.313 } 00:19:49.313 ] 00:19:49.313 }' 00:19:49.313 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:49.313 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:19:49.313 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:49.313 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.313 18:22:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:49.571 [2024-07-24 18:22:57.981666] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:49.571 [2024-07-24 18:22:57.981921] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:49.829 [2024-07-24 18:22:58.189556] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:49.829 [2024-07-24 18:22:58.403013] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:50.396 18:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:50.396 18:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.396 18:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:50.396 18:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:50.396 18:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:50.396 18:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:50.396 18:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.396 18:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:50.396 [2024-07-24 18:22:58.839114] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:50.655 18:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:50.655 "name": "raid_bdev1", 00:19:50.655 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:50.655 "strip_size_kb": 0, 00:19:50.655 "state": "online", 00:19:50.655 "raid_level": "raid1", 00:19:50.655 "superblock": true, 00:19:50.655 "num_base_bdevs": 2, 00:19:50.655 "num_base_bdevs_discovered": 2, 00:19:50.655 "num_base_bdevs_operational": 2, 00:19:50.655 "process": { 00:19:50.655 "type": "rebuild", 00:19:50.655 "target": "spare", 00:19:50.655 "progress": { 00:19:50.655 "blocks": 34816, 00:19:50.655 "percent": 54 00:19:50.655 } 00:19:50.655 }, 00:19:50.655 "base_bdevs_list": [ 00:19:50.655 { 00:19:50.655 "name": "spare", 00:19:50.655 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:50.655 "is_configured": true, 00:19:50.655 "data_offset": 2048, 00:19:50.655 "data_size": 63488 00:19:50.655 }, 00:19:50.655 { 00:19:50.655 "name": "BaseBdev2", 00:19:50.655 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:50.655 "is_configured": true, 00:19:50.655 "data_offset": 2048, 00:19:50.655 "data_size": 63488 00:19:50.655 } 00:19:50.655 ] 00:19:50.655 }' 00:19:50.655 18:22:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:50.655 18:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.655 18:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:50.655 18:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.655 18:22:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:51.629 [2024-07-24 18:22:59.911145] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:51.629 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:51.629 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.629 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:51.629 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:51.629 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:51.629 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:51.629 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.629 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.888 [2024-07-24 18:23:00.238645] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:51.888 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:51.888 "name": "raid_bdev1", 00:19:51.888 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:51.888 "strip_size_kb": 0, 00:19:51.888 "state": "online", 00:19:51.888 "raid_level": "raid1", 00:19:51.888 "superblock": true, 00:19:51.888 "num_base_bdevs": 2, 00:19:51.888 "num_base_bdevs_discovered": 2, 00:19:51.888 "num_base_bdevs_operational": 2, 00:19:51.888 "process": { 00:19:51.888 "type": "rebuild", 00:19:51.888 "target": "spare", 00:19:51.888 "progress": { 00:19:51.888 "blocks": 59392, 00:19:51.888 "percent": 93 00:19:51.888 } 00:19:51.888 }, 00:19:51.888 "base_bdevs_list": [ 
00:19:51.888 { 00:19:51.888 "name": "spare", 00:19:51.888 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:51.888 "is_configured": true, 00:19:51.888 "data_offset": 2048, 00:19:51.888 "data_size": 63488 00:19:51.888 }, 00:19:51.888 { 00:19:51.888 "name": "BaseBdev2", 00:19:51.888 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:51.888 "is_configured": true, 00:19:51.888 "data_offset": 2048, 00:19:51.888 "data_size": 63488 00:19:51.888 } 00:19:51.888 ] 00:19:51.888 }' 00:19:51.888 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:51.888 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.888 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:51.888 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.888 18:23:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:52.147 [2024-07-24 18:23:00.558728] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:52.147 [2024-07-24 18:23:00.658974] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:52.147 [2024-07-24 18:23:00.665593] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:53.082 "name": "raid_bdev1", 00:19:53.082 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:53.082 "strip_size_kb": 0, 00:19:53.082 "state": "online", 00:19:53.082 "raid_level": "raid1", 00:19:53.082 "superblock": true, 00:19:53.082 "num_base_bdevs": 2, 00:19:53.082 "num_base_bdevs_discovered": 2, 00:19:53.082 "num_base_bdevs_operational": 2, 00:19:53.082 "base_bdevs_list": [ 00:19:53.082 { 00:19:53.082 "name": "spare", 00:19:53.082 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:53.082 "is_configured": true, 00:19:53.082 "data_offset": 2048, 00:19:53.082 "data_size": 63488 00:19:53.082 }, 00:19:53.082 { 00:19:53.082 "name": "BaseBdev2", 00:19:53.082 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:53.082 "is_configured": true, 00:19:53.082 "data_offset": 2048, 00:19:53.082 "data_size": 63488 00:19:53.082 } 00:19:53.082 ] 00:19:53.082 }' 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:53.082 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:53.083 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:19:53.083 
18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.083 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:53.083 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:53.083 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:53.083 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:53.083 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.083 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.341 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:53.341 "name": "raid_bdev1", 00:19:53.341 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:53.341 "strip_size_kb": 0, 00:19:53.341 "state": "online", 00:19:53.341 "raid_level": "raid1", 00:19:53.342 "superblock": true, 00:19:53.342 "num_base_bdevs": 2, 00:19:53.342 "num_base_bdevs_discovered": 2, 00:19:53.342 "num_base_bdevs_operational": 2, 00:19:53.342 "base_bdevs_list": [ 00:19:53.342 { 00:19:53.342 "name": "spare", 00:19:53.342 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:53.342 "is_configured": true, 00:19:53.342 "data_offset": 2048, 00:19:53.342 "data_size": 63488 00:19:53.342 }, 00:19:53.342 { 00:19:53.342 "name": "BaseBdev2", 00:19:53.342 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:53.342 "is_configured": true, 00:19:53.342 "data_offset": 2048, 00:19:53.342 "data_size": 63488 00:19:53.342 } 00:19:53.342 ] 00:19:53.342 }' 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:53.342 
18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.342 18:23:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.600 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.600 "name": "raid_bdev1", 00:19:53.601 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:53.601 
"strip_size_kb": 0, 00:19:53.601 "state": "online", 00:19:53.601 "raid_level": "raid1", 00:19:53.601 "superblock": true, 00:19:53.601 "num_base_bdevs": 2, 00:19:53.601 "num_base_bdevs_discovered": 2, 00:19:53.601 "num_base_bdevs_operational": 2, 00:19:53.601 "base_bdevs_list": [ 00:19:53.601 { 00:19:53.601 "name": "spare", 00:19:53.601 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:53.601 "is_configured": true, 00:19:53.601 "data_offset": 2048, 00:19:53.601 "data_size": 63488 00:19:53.601 }, 00:19:53.601 { 00:19:53.601 "name": "BaseBdev2", 00:19:53.601 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:53.601 "is_configured": true, 00:19:53.601 "data_offset": 2048, 00:19:53.601 "data_size": 63488 00:19:53.601 } 00:19:53.601 ] 00:19:53.601 }' 00:19:53.601 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:53.601 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:54.169 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:54.169 [2024-07-24 18:23:02.670827] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:54.169 [2024-07-24 18:23:02.670853] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.169 00:19:54.169 Latency(us) 00:19:54.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.169 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:54.169 raid_bdev1 : 10.10 125.02 375.06 0.00 0.00 10878.63 235.93 113246.21 00:19:54.169 =================================================================================================================== 00:19:54.169 Total : 125.02 375.06 0.00 0.00 10878.63 235.93 113246.21 00:19:54.169 [2024-07-24 18:23:02.697510] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.169 [2024-07-24 18:23:02.697528] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.169 [2024-07-24 18:23:02.697578] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.169 [2024-07-24 18:23:02.697590] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe80030 name raid_bdev1, state offline 00:19:54.169 0 00:19:54.169 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.169 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:54.428 18:23:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.428 18:23:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:54.687 /dev/nbd0 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:54.687 1+0 records in 00:19:54.687 1+0 records out 00:19:54.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258866 s, 15.8 MB/s 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 
00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 
00:19:54.687 /dev/nbd1 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:19:54.687 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:54.688 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:54.688 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:54.688 1+0 records in 00:19:54.688 1+0 records out 00:19:54.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259416 s, 15.8 MB/s 00:19:54.688 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:54.947 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@41 -- # break 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:19:55.206 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:55.465 18:23:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:55.724 [2024-07-24 18:23:04.073710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:55.724 [2024-07-24 18:23:04.073742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.724 [2024-07-24 18:23:04.073755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe862f0 00:19:55.724 [2024-07-24 18:23:04.073779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.724 [2024-07-24 18:23:04.074945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.724 [2024-07-24 18:23:04.074966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:55.724 [2024-07-24 18:23:04.075020] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:55.724 [2024-07-24 18:23:04.075039] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.724 [2024-07-24 18:23:04.075109] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:55.724 spare 00:19:55.724 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:55.724 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:55.724 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:55.724 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:55.724 18:23:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:55.724 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:55.724 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.724 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.724 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.725 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.725 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.725 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.725 [2024-07-24 18:23:04.175397] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe84a40 00:19:55.725 [2024-07-24 18:23:04.175408] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:55.725 [2024-07-24 18:23:04.175525] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x103aa00 00:19:55.725 [2024-07-24 18:23:04.175617] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe84a40 00:19:55.725 [2024-07-24 18:23:04.175624] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe84a40 00:19:55.725 [2024-07-24 18:23:04.175697] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.725 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.725 "name": "raid_bdev1", 00:19:55.725 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:55.725 "strip_size_kb": 0, 00:19:55.725 "state": "online", 00:19:55.725 
"raid_level": "raid1", 00:19:55.725 "superblock": true, 00:19:55.725 "num_base_bdevs": 2, 00:19:55.725 "num_base_bdevs_discovered": 2, 00:19:55.725 "num_base_bdevs_operational": 2, 00:19:55.725 "base_bdevs_list": [ 00:19:55.725 { 00:19:55.725 "name": "spare", 00:19:55.725 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:55.725 "is_configured": true, 00:19:55.725 "data_offset": 2048, 00:19:55.725 "data_size": 63488 00:19:55.725 }, 00:19:55.725 { 00:19:55.725 "name": "BaseBdev2", 00:19:55.725 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:55.725 "is_configured": true, 00:19:55.725 "data_offset": 2048, 00:19:55.725 "data_size": 63488 00:19:55.725 } 00:19:55.725 ] 00:19:55.725 }' 00:19:55.725 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.725 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:56.294 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.294 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:56.295 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:56.295 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:56.295 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:56.295 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.295 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.554 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:56.554 "name": "raid_bdev1", 00:19:56.554 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 
00:19:56.554 "strip_size_kb": 0, 00:19:56.554 "state": "online", 00:19:56.554 "raid_level": "raid1", 00:19:56.554 "superblock": true, 00:19:56.554 "num_base_bdevs": 2, 00:19:56.554 "num_base_bdevs_discovered": 2, 00:19:56.554 "num_base_bdevs_operational": 2, 00:19:56.554 "base_bdevs_list": [ 00:19:56.554 { 00:19:56.554 "name": "spare", 00:19:56.554 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:56.554 "is_configured": true, 00:19:56.554 "data_offset": 2048, 00:19:56.554 "data_size": 63488 00:19:56.554 }, 00:19:56.554 { 00:19:56.554 "name": "BaseBdev2", 00:19:56.554 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:56.554 "is_configured": true, 00:19:56.554 "data_offset": 2048, 00:19:56.554 "data_size": 63488 00:19:56.554 } 00:19:56.554 ] 00:19:56.554 }' 00:19:56.554 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:56.554 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:56.554 18:23:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:56.554 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:56.554 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.554 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:56.813 [2024-07-24 18:23:05.337150] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.813 18:23:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.813 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.073 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.073 "name": "raid_bdev1", 00:19:57.073 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:57.073 "strip_size_kb": 0, 00:19:57.073 "state": "online", 00:19:57.073 "raid_level": "raid1", 00:19:57.073 "superblock": true, 00:19:57.073 "num_base_bdevs": 2, 00:19:57.073 "num_base_bdevs_discovered": 1, 00:19:57.073 "num_base_bdevs_operational": 1, 00:19:57.073 "base_bdevs_list": [ 00:19:57.073 { 00:19:57.073 "name": null, 00:19:57.073 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:57.073 "is_configured": false, 00:19:57.073 "data_offset": 2048, 00:19:57.073 "data_size": 63488 00:19:57.073 }, 00:19:57.073 { 00:19:57.073 "name": "BaseBdev2", 00:19:57.073 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:57.073 "is_configured": true, 00:19:57.073 "data_offset": 2048, 00:19:57.073 "data_size": 63488 00:19:57.073 } 00:19:57.073 ] 00:19:57.073 }' 00:19:57.073 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.073 18:23:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:57.642 18:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:57.642 [2024-07-24 18:23:06.163362] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.642 [2024-07-24 18:23:06.163486] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:57.642 [2024-07-24 18:23:06.163498] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:57.642 [2024-07-24 18:23:06.163517] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.642 [2024-07-24 18:23:06.168226] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1021630 00:19:57.642 [2024-07-24 18:23:06.169955] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:57.642 18:23:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:59.022 "name": "raid_bdev1", 00:19:59.022 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:59.022 "strip_size_kb": 0, 00:19:59.022 "state": "online", 00:19:59.022 "raid_level": "raid1", 00:19:59.022 "superblock": true, 00:19:59.022 "num_base_bdevs": 2, 00:19:59.022 "num_base_bdevs_discovered": 2, 00:19:59.022 "num_base_bdevs_operational": 2, 00:19:59.022 "process": { 00:19:59.022 "type": "rebuild", 00:19:59.022 "target": "spare", 00:19:59.022 "progress": { 00:19:59.022 "blocks": 22528, 
00:19:59.022 "percent": 35 00:19:59.022 } 00:19:59.022 }, 00:19:59.022 "base_bdevs_list": [ 00:19:59.022 { 00:19:59.022 "name": "spare", 00:19:59.022 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:19:59.022 "is_configured": true, 00:19:59.022 "data_offset": 2048, 00:19:59.022 "data_size": 63488 00:19:59.022 }, 00:19:59.022 { 00:19:59.022 "name": "BaseBdev2", 00:19:59.022 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:59.022 "is_configured": true, 00:19:59.022 "data_offset": 2048, 00:19:59.022 "data_size": 63488 00:19:59.022 } 00:19:59.022 ] 00:19:59.022 }' 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.022 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:59.022 [2024-07-24 18:23:07.592594] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:59.282 [2024-07-24 18:23:07.680344] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:59.282 [2024-07-24 18:23:07.680376] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.282 [2024-07-24 18:23:07.680385] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:59.282 [2024-07-24 18:23:07.680390] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.282 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.541 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.541 "name": "raid_bdev1", 00:19:59.541 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:19:59.541 "strip_size_kb": 0, 00:19:59.541 "state": "online", 00:19:59.541 "raid_level": "raid1", 00:19:59.541 "superblock": true, 00:19:59.541 "num_base_bdevs": 2, 00:19:59.541 "num_base_bdevs_discovered": 1, 00:19:59.541 "num_base_bdevs_operational": 1, 00:19:59.541 "base_bdevs_list": [ 00:19:59.541 { 00:19:59.541 "name": null, 00:19:59.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.541 "is_configured": false, 00:19:59.541 
"data_offset": 2048, 00:19:59.541 "data_size": 63488 00:19:59.541 }, 00:19:59.541 { 00:19:59.541 "name": "BaseBdev2", 00:19:59.541 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:19:59.541 "is_configured": true, 00:19:59.541 "data_offset": 2048, 00:19:59.541 "data_size": 63488 00:19:59.541 } 00:19:59.541 ] 00:19:59.541 }' 00:19:59.541 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.541 18:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:59.799 18:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:00.058 [2024-07-24 18:23:08.502826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:00.058 [2024-07-24 18:23:08.502870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.058 [2024-07-24 18:23:08.502886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe87a70 00:20:00.058 [2024-07-24 18:23:08.502894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.058 [2024-07-24 18:23:08.503185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.058 [2024-07-24 18:23:08.503199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:00.058 [2024-07-24 18:23:08.503260] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:00.059 [2024-07-24 18:23:08.503269] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:00.059 [2024-07-24 18:23:08.503276] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:00.059 [2024-07-24 18:23:08.503289] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.059 [2024-07-24 18:23:08.508022] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x103aa00 00:20:00.059 spare 00:20:00.059 [2024-07-24 18:23:08.509060] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:00.059 18:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:20:00.996 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.996 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:00.996 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:00.996 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:00.996 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:00.996 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.996 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.255 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:01.255 "name": "raid_bdev1", 00:20:01.255 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:20:01.255 "strip_size_kb": 0, 00:20:01.255 "state": "online", 00:20:01.255 "raid_level": "raid1", 00:20:01.255 "superblock": true, 00:20:01.255 "num_base_bdevs": 2, 00:20:01.255 "num_base_bdevs_discovered": 2, 00:20:01.255 "num_base_bdevs_operational": 2, 00:20:01.255 "process": { 00:20:01.255 "type": "rebuild", 00:20:01.255 "target": "spare", 00:20:01.255 "progress": { 00:20:01.255 
"blocks": 22528, 00:20:01.255 "percent": 35 00:20:01.255 } 00:20:01.255 }, 00:20:01.255 "base_bdevs_list": [ 00:20:01.255 { 00:20:01.255 "name": "spare", 00:20:01.255 "uuid": "2042ca77-6ff4-5625-8ca5-886728e71063", 00:20:01.255 "is_configured": true, 00:20:01.255 "data_offset": 2048, 00:20:01.255 "data_size": 63488 00:20:01.255 }, 00:20:01.255 { 00:20:01.255 "name": "BaseBdev2", 00:20:01.255 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:20:01.255 "is_configured": true, 00:20:01.255 "data_offset": 2048, 00:20:01.255 "data_size": 63488 00:20:01.255 } 00:20:01.255 ] 00:20:01.255 }' 00:20:01.255 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:01.256 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.256 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:01.256 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.256 18:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:01.515 [2024-07-24 18:23:09.920046] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:01.515 [2024-07-24 18:23:10.019517] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:01.515 [2024-07-24 18:23:10.019551] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.515 [2024-07-24 18:23:10.019561] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:01.515 [2024-07-24 18:23:10.019566] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.515 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.774 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.774 "name": "raid_bdev1", 00:20:01.774 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:20:01.774 "strip_size_kb": 0, 00:20:01.774 "state": "online", 00:20:01.774 "raid_level": "raid1", 00:20:01.774 "superblock": true, 00:20:01.774 "num_base_bdevs": 2, 00:20:01.774 "num_base_bdevs_discovered": 1, 00:20:01.774 "num_base_bdevs_operational": 1, 00:20:01.774 "base_bdevs_list": [ 00:20:01.774 { 00:20:01.774 "name": null, 00:20:01.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.774 "is_configured": false, 00:20:01.774 
"data_offset": 2048, 00:20:01.774 "data_size": 63488 00:20:01.774 }, 00:20:01.774 { 00:20:01.774 "name": "BaseBdev2", 00:20:01.774 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:20:01.774 "is_configured": true, 00:20:01.774 "data_offset": 2048, 00:20:01.774 "data_size": 63488 00:20:01.774 } 00:20:01.774 ] 00:20:01.774 }' 00:20:01.774 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.774 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:02.343 "name": "raid_bdev1", 00:20:02.343 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:20:02.343 "strip_size_kb": 0, 00:20:02.343 "state": "online", 00:20:02.343 "raid_level": "raid1", 00:20:02.343 "superblock": true, 00:20:02.343 "num_base_bdevs": 2, 00:20:02.343 "num_base_bdevs_discovered": 1, 00:20:02.343 "num_base_bdevs_operational": 1, 00:20:02.343 "base_bdevs_list": [ 00:20:02.343 { 00:20:02.343 "name": null, 00:20:02.343 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:02.343 "is_configured": false, 00:20:02.343 "data_offset": 2048, 00:20:02.343 "data_size": 63488 00:20:02.343 }, 00:20:02.343 { 00:20:02.343 "name": "BaseBdev2", 00:20:02.343 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:20:02.343 "is_configured": true, 00:20:02.343 "data_offset": 2048, 00:20:02.343 "data_size": 63488 00:20:02.343 } 00:20:02.343 ] 00:20:02.343 }' 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:02.343 18:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:02.602 18:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:02.862 [2024-07-24 18:23:11.255088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:02.862 [2024-07-24 18:23:11.255124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.862 [2024-07-24 18:23:11.255141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1020380 00:20:02.862 [2024-07-24 18:23:11.255150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.862 [2024-07-24 18:23:11.255419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.862 [2024-07-24 18:23:11.255432] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:02.862 [2024-07-24 18:23:11.255482] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:02.862 [2024-07-24 18:23:11.255490] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:02.862 [2024-07-24 18:23:11.255497] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:02.862 BaseBdev1 00:20:02.862 18:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:20:03.799 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:03.799 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:03.800 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:03.800 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:03.800 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:03.800 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:03.800 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.800 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.800 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.800 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.800 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.800 18:23:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.059 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.059 "name": "raid_bdev1", 00:20:04.059 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:20:04.059 "strip_size_kb": 0, 00:20:04.059 "state": "online", 00:20:04.059 "raid_level": "raid1", 00:20:04.059 "superblock": true, 00:20:04.059 "num_base_bdevs": 2, 00:20:04.059 "num_base_bdevs_discovered": 1, 00:20:04.059 "num_base_bdevs_operational": 1, 00:20:04.059 "base_bdevs_list": [ 00:20:04.059 { 00:20:04.059 "name": null, 00:20:04.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.059 "is_configured": false, 00:20:04.059 "data_offset": 2048, 00:20:04.059 "data_size": 63488 00:20:04.059 }, 00:20:04.059 { 00:20:04.059 "name": "BaseBdev2", 00:20:04.059 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:20:04.059 "is_configured": true, 00:20:04.059 "data_offset": 2048, 00:20:04.059 "data_size": 63488 00:20:04.059 } 00:20:04.059 ] 00:20:04.059 }' 00:20:04.059 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.059 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.627 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:04.627 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:04.628 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:04.628 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:04.628 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:04.628 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.628 18:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:04.628 "name": "raid_bdev1", 00:20:04.628 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:20:04.628 "strip_size_kb": 0, 00:20:04.628 "state": "online", 00:20:04.628 "raid_level": "raid1", 00:20:04.628 "superblock": true, 00:20:04.628 "num_base_bdevs": 2, 00:20:04.628 "num_base_bdevs_discovered": 1, 00:20:04.628 "num_base_bdevs_operational": 1, 00:20:04.628 "base_bdevs_list": [ 00:20:04.628 { 00:20:04.628 "name": null, 00:20:04.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.628 "is_configured": false, 00:20:04.628 "data_offset": 2048, 00:20:04.628 "data_size": 63488 00:20:04.628 }, 00:20:04.628 { 00:20:04.628 "name": "BaseBdev2", 00:20:04.628 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:20:04.628 "is_configured": true, 00:20:04.628 "data_offset": 2048, 00:20:04.628 "data_size": 63488 00:20:04.628 } 00:20:04.628 ] 00:20:04.628 }' 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local 
es=0 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:04.628 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.887 [2024-07-24 18:23:13.368753] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.887 [2024-07-24 18:23:13.368871] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:04.887 
[2024-07-24 18:23:13.368881] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:04.887 request: 00:20:04.887 { 00:20:04.887 "base_bdev": "BaseBdev1", 00:20:04.887 "raid_bdev": "raid_bdev1", 00:20:04.887 "method": "bdev_raid_add_base_bdev", 00:20:04.887 "req_id": 1 00:20:04.887 } 00:20:04.887 Got JSON-RPC error response 00:20:04.887 response: 00:20:04.887 { 00:20:04.887 "code": -22, 00:20:04.887 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:04.887 } 00:20:04.887 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:20:04.887 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:04.887 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:04.887 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:04.887 18:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:05.823 18:23:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.823 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.082 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:06.082 "name": "raid_bdev1", 00:20:06.082 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:20:06.082 "strip_size_kb": 0, 00:20:06.082 "state": "online", 00:20:06.082 "raid_level": "raid1", 00:20:06.082 "superblock": true, 00:20:06.082 "num_base_bdevs": 2, 00:20:06.082 "num_base_bdevs_discovered": 1, 00:20:06.082 "num_base_bdevs_operational": 1, 00:20:06.082 "base_bdevs_list": [ 00:20:06.082 { 00:20:06.082 "name": null, 00:20:06.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.082 "is_configured": false, 00:20:06.082 "data_offset": 2048, 00:20:06.082 "data_size": 63488 00:20:06.082 }, 00:20:06.082 { 00:20:06.082 "name": "BaseBdev2", 00:20:06.082 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:20:06.082 "is_configured": true, 00:20:06.082 "data_offset": 2048, 00:20:06.082 "data_size": 63488 00:20:06.082 } 00:20:06.082 ] 00:20:06.082 }' 00:20:06.082 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:06.082 18:23:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.651 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.651 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:06.651 18:23:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:06.651 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:06.651 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:06.651 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.651 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.651 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:06.651 "name": "raid_bdev1", 00:20:06.651 "uuid": "e3a87fed-d64a-45e4-ab9a-a4e47916fb94", 00:20:06.651 "strip_size_kb": 0, 00:20:06.651 "state": "online", 00:20:06.651 "raid_level": "raid1", 00:20:06.651 "superblock": true, 00:20:06.651 "num_base_bdevs": 2, 00:20:06.651 "num_base_bdevs_discovered": 1, 00:20:06.651 "num_base_bdevs_operational": 1, 00:20:06.651 "base_bdevs_list": [ 00:20:06.651 { 00:20:06.651 "name": null, 00:20:06.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.651 "is_configured": false, 00:20:06.651 "data_offset": 2048, 00:20:06.651 "data_size": 63488 00:20:06.652 }, 00:20:06.652 { 00:20:06.652 "name": "BaseBdev2", 00:20:06.652 "uuid": "8587cfea-e004-5648-95ea-741fb4224f1b", 00:20:06.652 "is_configured": true, 00:20:06.652 "data_offset": 2048, 00:20:06.652 "data_size": 63488 00:20:06.652 } 00:20:06.652 ] 00:20:06.652 }' 00:20:06.652 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:06.969 18:23:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2268679 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 2268679 ']' 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 2268679 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2268679 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2268679' 00:20:06.969 killing process with pid 2268679 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 2268679 00:20:06.969 Received shutdown signal, test time was about 22.746359 seconds 00:20:06.969 00:20:06.969 Latency(us) 00:20:06.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.969 =================================================================================================================== 00:20:06.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.969 [2024-07-24 18:23:15.369670] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.969 [2024-07-24 18:23:15.369747] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.969 [2024-07-24 18:23:15.369783] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:20:06.969 [2024-07-24 18:23:15.369796] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe84a40 name raid_bdev1, state offline 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 2268679 00:20:06.969 [2024-07-24 18:23:15.388003] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:06.969 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:20:06.969 00:20:06.969 real 0m26.357s 00:20:06.969 user 0m39.625s 00:20:06.969 sys 0m3.797s 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.228 ************************************ 00:20:07.228 END TEST raid_rebuild_test_sb_io 00:20:07.228 ************************************ 00:20:07.228 18:23:15 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:20:07.228 18:23:15 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:20:07.228 18:23:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:07.228 18:23:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.228 18:23:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.228 ************************************ 00:20:07.228 START TEST raid_rebuild_test 00:20:07.228 ************************************ 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:20:07.228 
18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 
00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=2273643 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 2273643 /var/tmp/spdk-raid.sock 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 2273643 ']' 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:07.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.228 18:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.228 [2024-07-24 18:23:15.718925] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:20:07.228 [2024-07-24 18:23:15.718969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273643 ] 00:20:07.228 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:07.228 Zero copy mechanism will not be used. 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:01.0 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:01.1 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:01.2 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:01.3 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:01.4 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:01.5 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:01.6 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:01.7 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:02.0 
cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:02.1 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:02.2 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:02.3 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:02.4 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:02.5 cannot be used 00:20:07.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.228 EAL: Requested device 0000:b3:02.6 cannot be used 00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.229 EAL: Requested device 0000:b3:02.7 cannot be used 00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.229 EAL: Requested device 0000:b5:01.0 cannot be used 00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.229 EAL: Requested device 0000:b5:01.1 cannot be used 00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.229 EAL: Requested device 0000:b5:01.2 cannot be used 00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.229 EAL: Requested device 0000:b5:01.3 cannot be used 00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.229 EAL: Requested device 0000:b5:01.4 cannot be used 00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.229 EAL: Requested device 0000:b5:01.5 cannot be used 00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:07.229 EAL: Requested device 0000:b5:01.6 cannot be used 
00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:07.229 EAL: Requested device 0000:b5:01.7 cannot be used
00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:07.229 EAL: Requested device 0000:b5:02.0 cannot be used
00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:07.229 EAL: Requested device 0000:b5:02.1 cannot be used
00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:07.229 EAL: Requested device 0000:b5:02.2 cannot be used
00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:07.229 EAL: Requested device 0000:b5:02.3 cannot be used
00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:07.229 EAL: Requested device 0000:b5:02.4 cannot be used
00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:07.229 EAL: Requested device 0000:b5:02.5 cannot be used
00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:07.229 EAL: Requested device 0000:b5:02.6 cannot be used
00:20:07.229 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:07.229 EAL: Requested device 0000:b5:02.7 cannot be used
00:20:07.229 [2024-07-24 18:23:15.812959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:07.488 [2024-07-24 18:23:15.887117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:07.488 [2024-07-24 18:23:15.941729] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:07.488 [2024-07-24 18:23:15.941753] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:08.056 18:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:08.056 18:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0
00:20:08.056 18:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}"
00:20:08.056 18:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:20:08.316 BaseBdev1_malloc
00:20:08.316 18:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:20:08.316 [2024-07-24 18:23:16.834105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:20:08.316 [2024-07-24 18:23:16.834138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:08.316 [2024-07-24 18:23:16.834154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2242370
00:20:08.316 [2024-07-24 18:23:16.834162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:08.316 [2024-07-24 18:23:16.835253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:08.316 [2024-07-24 18:23:16.835275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:20:08.316 BaseBdev1
00:20:08.316 18:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}"
00:20:08.316 18:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:20:08.575 BaseBdev2_malloc
00:20:08.575 18:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:20:08.834 [2024-07-24 18:23:17.182809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:20:08.834 [2024-07-24 18:23:17.182843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:08.834 [2024-07-24 18:23:17.182855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23e5e70
00:20:08.834 [2024-07-24 18:23:17.182863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:08.834 [2024-07-24 18:23:17.183942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:08.834 [2024-07-24 18:23:17.183964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:20:08.834 BaseBdev2
00:20:08.835 18:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}"
00:20:08.835 18:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:20:08.835 BaseBdev3_malloc
00:20:08.835 18:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:20:09.094 [2024-07-24 18:23:17.499260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:20:09.094 [2024-07-24 18:23:17.499292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:09.094 [2024-07-24 18:23:17.499304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23dc160
00:20:09.094 [2024-07-24 18:23:17.499312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:09.094 [2024-07-24 18:23:17.500330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:09.094 [2024-07-24 18:23:17.500353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:20:09.094 BaseBdev3
00:20:09.094 18:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}"
00:20:09.094 18:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:20:09.094 BaseBdev4_malloc
00:20:09.094 18:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:20:09.353 [2024-07-24 18:23:17.839704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:20:09.353 [2024-07-24 18:23:17.839738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:09.353 [2024-07-24 18:23:17.839751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23dca80
00:20:09.353 [2024-07-24 18:23:17.839759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:09.353 [2024-07-24 18:23:17.840756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:09.353 [2024-07-24 18:23:17.840776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:20:09.353 BaseBdev4
00:20:09.353 18:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:20:09.638 spare_malloc
00:20:09.638 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:20:09.638 spare_delay
00:20:09.638 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:20:09.897 [2024-07-24 18:23:18.332428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:09.897 [2024-07-24 18:23:18.332462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:09.897 [2024-07-24 18:23:18.332476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x223bb70
00:20:09.897 [2024-07-24 18:23:18.332484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:09.897 [2024-07-24 18:23:18.333562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:09.897 [2024-07-24 18:23:18.333585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:09.897 spare
00:20:09.897 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:20:09.897 [2024-07-24 18:23:18.492865] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:20:10.157 [2024-07-24 18:23:18.493757] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:20:10.157 [2024-07-24 18:23:18.493798] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:10.157 [2024-07-24 18:23:18.493828] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:10.157 [2024-07-24 18:23:18.493880] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x223e370
00:20:10.157 [2024-07-24 18:23:18.493887] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:20:10.157 [2024-07-24 18:23:18.494030] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2241140
00:20:10.157 [2024-07-24 18:23:18.494136] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x223e370
00:20:10.157 [2024-07-24 18:23:18.494142] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x223e370
00:20:10.157 [2024-07-24 18:23:18.494216] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:10.157 "name": "raid_bdev1",
00:20:10.157 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9",
00:20:10.157 "strip_size_kb": 0,
00:20:10.157 "state": "online",
00:20:10.157 "raid_level": "raid1",
00:20:10.157 "superblock": false,
00:20:10.157 "num_base_bdevs": 4,
00:20:10.157 "num_base_bdevs_discovered": 4,
00:20:10.157 "num_base_bdevs_operational": 4,
00:20:10.157 "base_bdevs_list": [
00:20:10.157 {
00:20:10.157 "name": "BaseBdev1",
00:20:10.157 "uuid": "d11fc011-beb1-5064-bef4-d92d72590daf",
00:20:10.157 "is_configured": true,
00:20:10.157 "data_offset": 0,
00:20:10.157 "data_size": 65536
00:20:10.157 },
00:20:10.157 {
00:20:10.157 "name": "BaseBdev2",
00:20:10.157 "uuid": "2e9d15b8-1286-5582-914c-e85d42c1597a",
00:20:10.157 "is_configured": true,
00:20:10.157 "data_offset": 0,
00:20:10.157 "data_size": 65536
00:20:10.157 },
00:20:10.157 {
00:20:10.157 "name": "BaseBdev3",
00:20:10.157 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db",
00:20:10.157 "is_configured": true,
00:20:10.157 "data_offset": 0,
00:20:10.157 "data_size": 65536
00:20:10.157 },
00:20:10.157 {
00:20:10.157 "name": "BaseBdev4",
00:20:10.157 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012",
00:20:10.157 "is_configured": true,
00:20:10.157 "data_offset": 0,
00:20:10.157 "data_size": 65536
00:20:10.157 }
00:20:10.157 ]
00:20:10.157 }'
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:10.157 18:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:10.726 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:10.726 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks'
00:20:10.726 [2024-07-24 18:23:19.315166] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']'
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']'
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:10.985 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:20:11.245 [2024-07-24 18:23:19.667900] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x223de40
00:20:11.245 /dev/nbd0
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:11.245 1+0 records in
00:20:11.245 1+0 records out
00:20:11.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018067 s, 22.7 MB/s
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']'
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1
00:20:11.245 18:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:20:15.438 65536+0 records in
00:20:15.439 65536+0 records out
00:20:15.439 33554432 bytes (34 MB, 32 MiB) copied, 4.27018 s, 7.9 MB/s
00:20:15.439 18:23:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:20:15.439 18:23:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:20:15.439 18:23:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:15.439 18:23:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:15.439 18:23:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:20:15.439 18:23:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:15.439 18:23:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:20:15.698 18:23:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:15.698 18:23:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:15.698 18:23:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:15.698 18:23:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:15.698 [2024-07-24 18:23:24.180139] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:15.698 18:23:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:15.698 18:23:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:15.698 18:23:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:20:15.698 18:23:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:20:15.698 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:20:15.957 [2024-07-24 18:23:24.340587] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:15.957 "name": "raid_bdev1",
00:20:15.957 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9",
00:20:15.957 "strip_size_kb": 0,
00:20:15.957 "state": "online",
00:20:15.957 "raid_level": "raid1",
00:20:15.957 "superblock": false,
00:20:15.957 "num_base_bdevs": 4,
00:20:15.957 "num_base_bdevs_discovered": 3,
00:20:15.957 "num_base_bdevs_operational": 3,
00:20:15.957 "base_bdevs_list": [
00:20:15.957 {
00:20:15.957 "name": null,
00:20:15.957 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:15.957 "is_configured": false,
00:20:15.957 "data_offset": 0,
00:20:15.957 "data_size": 65536
00:20:15.957 },
00:20:15.957 {
00:20:15.957 "name": "BaseBdev2",
00:20:15.957 "uuid": "2e9d15b8-1286-5582-914c-e85d42c1597a",
00:20:15.957 "is_configured": true,
00:20:15.957 "data_offset": 0,
00:20:15.957 "data_size": 65536
00:20:15.957 },
00:20:15.957 {
00:20:15.957 "name": "BaseBdev3",
00:20:15.957 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db",
00:20:15.957 "is_configured": true,
00:20:15.957 "data_offset": 0,
00:20:15.957 "data_size": 65536
00:20:15.957 },
00:20:15.957 {
00:20:15.957 "name": "BaseBdev4",
00:20:15.957 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012",
00:20:15.957 "is_configured": true,
00:20:15.957 "data_offset": 0,
00:20:15.957 "data_size": 65536
00:20:15.957 }
00:20:15.957 ]
00:20:15.957 }'
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:15.957 18:23:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:16.525 18:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:16.784 [2024-07-24 18:23:25.158727] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:16.784 [2024-07-24 18:23:25.162299] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2241260
00:20:16.784 [2024-07-24 18:23:25.163836] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:16.784 18:23:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1
00:20:17.722 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:17.722 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:20:17.722 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:20:17.722 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:20:17.722 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:20:17.722 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:17.722 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:17.981 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:17.981 "name": "raid_bdev1",
00:20:17.981 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9",
00:20:17.981 "strip_size_kb": 0,
00:20:17.981 "state": "online",
00:20:17.981 "raid_level": "raid1",
00:20:17.981 "superblock": false,
00:20:17.981 "num_base_bdevs": 4,
00:20:17.981 "num_base_bdevs_discovered": 4,
00:20:17.981 "num_base_bdevs_operational": 4,
00:20:17.981 "process": {
00:20:17.981 "type": "rebuild",
00:20:17.981 "target": "spare",
00:20:17.981 "progress": {
00:20:17.981 "blocks": 22528,
00:20:17.981 "percent": 34
00:20:17.981 }
00:20:17.981 },
00:20:17.981 "base_bdevs_list": [
00:20:17.981 {
00:20:17.981 "name": "spare",
00:20:17.981 "uuid": "51fc3f29-a676-5ea3-a7b5-7debcc24b133",
00:20:17.981 "is_configured": true,
00:20:17.981 "data_offset": 0,
00:20:17.981 "data_size": 65536
00:20:17.981 },
00:20:17.981 {
00:20:17.981 "name": "BaseBdev2",
00:20:17.981 "uuid": "2e9d15b8-1286-5582-914c-e85d42c1597a",
00:20:17.981 "is_configured": true,
00:20:17.981 "data_offset": 0,
00:20:17.981 "data_size": 65536
00:20:17.981 },
00:20:17.981 {
00:20:17.981 "name": "BaseBdev3",
00:20:17.981 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db",
00:20:17.981 "is_configured": true,
00:20:17.981 "data_offset": 0,
00:20:17.981 "data_size": 65536
00:20:17.981 },
00:20:17.981 {
00:20:17.981 "name": "BaseBdev4",
00:20:17.981 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012",
00:20:17.981 "is_configured": true,
00:20:17.981 "data_offset": 0,
00:20:17.981 "data_size": 65536
00:20:17.981 }
00:20:17.981 ]
00:20:17.981 }'
00:20:17.981 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:20:17.981 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:17.981 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:20:17.981 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:20:17.981 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:20:18.241 [2024-07-24 18:23:26.583980] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:18.241 [2024-07-24 18:23:26.674226] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:18.241 [2024-07-24 18:23:26.674256] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:18.241 [2024-07-24 18:23:26.674268] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:18.241 [2024-07-24 18:23:26.674273] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:18.241 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:18.500 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:20:18.500 "name": "raid_bdev1",
00:20:18.500 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9",
00:20:18.500 "strip_size_kb": 0,
00:20:18.500 "state": "online",
00:20:18.500 "raid_level": "raid1",
00:20:18.500 "superblock": false,
00:20:18.500 "num_base_bdevs": 4,
00:20:18.500 "num_base_bdevs_discovered": 3,
00:20:18.500 "num_base_bdevs_operational": 3,
00:20:18.500 "base_bdevs_list": [
00:20:18.500 {
00:20:18.500 "name": null,
00:20:18.500 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:18.500 "is_configured": false,
00:20:18.500 "data_offset": 0,
00:20:18.501 "data_size": 65536
00:20:18.501 },
00:20:18.501 {
00:20:18.501 "name": "BaseBdev2",
00:20:18.501 "uuid": "2e9d15b8-1286-5582-914c-e85d42c1597a",
00:20:18.501 "is_configured": true,
00:20:18.501 "data_offset": 0,
00:20:18.501 "data_size": 65536
00:20:18.501 },
00:20:18.501 {
00:20:18.501 "name": "BaseBdev3",
00:20:18.501 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db",
00:20:18.501 "is_configured": true,
00:20:18.501 "data_offset": 0,
00:20:18.501 "data_size": 65536
00:20:18.501 },
00:20:18.501 {
00:20:18.501 "name": "BaseBdev4",
00:20:18.501 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012",
00:20:18.501 "is_configured": true,
00:20:18.501 "data_offset": 0,
00:20:18.501 "data_size": 65536
00:20:18.501 }
00:20:18.501 ]
00:20:18.501 }'
00:20:18.501 18:23:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:20:18.501 18:23:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:20:18.760 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:19.020 "name": "raid_bdev1",
00:20:19.020 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9",
00:20:19.020 "strip_size_kb": 0,
00:20:19.020 "state": "online",
00:20:19.020 "raid_level": "raid1",
00:20:19.020 "superblock": false,
00:20:19.020 "num_base_bdevs": 4,
00:20:19.020 "num_base_bdevs_discovered": 3,
00:20:19.020 "num_base_bdevs_operational": 3,
00:20:19.020 "base_bdevs_list": [
00:20:19.020 {
00:20:19.020 "name": null,
00:20:19.020 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:19.020 "is_configured": false,
00:20:19.020 "data_offset": 0,
00:20:19.020 "data_size": 65536
00:20:19.020 },
00:20:19.020 {
00:20:19.020 "name": "BaseBdev2",
00:20:19.020 "uuid": "2e9d15b8-1286-5582-914c-e85d42c1597a",
00:20:19.020 "is_configured": true,
00:20:19.020 "data_offset": 0,
00:20:19.020 "data_size": 65536
00:20:19.020 },
00:20:19.020 {
00:20:19.020 "name": "BaseBdev3",
00:20:19.020 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db",
00:20:19.020 "is_configured": true,
00:20:19.020 "data_offset": 0,
00:20:19.020 "data_size": 65536
00:20:19.020 },
00:20:19.020 {
00:20:19.020 "name": "BaseBdev4",
00:20:19.020 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012",
00:20:19.020 "is_configured": true,
00:20:19.020 "data_offset": 0,
00:20:19.020 "data_size": 65536
00:20:19.020 }
00:20:19.020 ]
00:20:19.020 }'
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:20:19.020 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:20:19.279 [2024-07-24 18:23:27.756615] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:19.280 [2024-07-24 18:23:27.760198] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x223a250
00:20:19.280 [2024-07-24 18:23:27.761223] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:19.280 18:23:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1
00:20:20.215 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:20.215 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:20:20.215 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:20:20.215 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare
00:20:20.215 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:20:20.215 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:20.215 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:20.474 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:20.474 "name": "raid_bdev1",
00:20:20.474 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9",
00:20:20.474 "strip_size_kb": 0,
00:20:20.474 "state": "online",
00:20:20.474 "raid_level": "raid1",
00:20:20.474 "superblock": false,
00:20:20.474 "num_base_bdevs": 4,
00:20:20.474 "num_base_bdevs_discovered": 4,
00:20:20.474 "num_base_bdevs_operational": 4,
00:20:20.474 "process": {
00:20:20.474 "type": "rebuild",
00:20:20.474 "target": "spare",
00:20:20.474 "progress": {
00:20:20.474 "blocks": 22528,
00:20:20.474 "percent": 34
00:20:20.474 }
00:20:20.474 },
00:20:20.474 "base_bdevs_list": [
00:20:20.474 {
00:20:20.474 "name": "spare",
00:20:20.474 "uuid": "51fc3f29-a676-5ea3-a7b5-7debcc24b133",
00:20:20.474 "is_configured": true,
00:20:20.474 "data_offset": 0,
00:20:20.474 "data_size": 65536
00:20:20.474 },
00:20:20.474 {
00:20:20.474 "name": "BaseBdev2",
00:20:20.474 "uuid": "2e9d15b8-1286-5582-914c-e85d42c1597a",
00:20:20.474 "is_configured": true,
00:20:20.474 "data_offset": 0,
00:20:20.474 "data_size": 65536
00:20:20.474 },
00:20:20.474 {
00:20:20.474 "name": "BaseBdev3",
00:20:20.474 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db",
00:20:20.474 "is_configured": true,
00:20:20.474 "data_offset": 0,
00:20:20.474 "data_size": 65536
00:20:20.474 },
00:20:20.474 {
00:20:20.474 "name": "BaseBdev4",
00:20:20.474 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012",
00:20:20.474 "is_configured": true,
00:20:20.474 "data_offset": 0,
00:20:20.474 "data_size": 65536
00:20:20.474 }
00:20:20.474 ]
00:20:20.474 }'
00:20:20.474 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:20:20.474 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:20.474 18:23:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:20:20.475 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:20:20.475 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']'
00:20:20.475 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4
00:20:20.475 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']'
00:20:20.475 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']'
00:20:20.475 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:20:20.734 [2024-07-24 18:23:29.189360]
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:20.734 [2024-07-24 18:23:29.271577] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x223a250 00:20:20.734 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:20:20.734 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:20:20.734 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.734 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:20.734 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:20.734 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:20.734 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:20.734 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.734 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.993 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:20.993 "name": "raid_bdev1", 00:20:20.993 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9", 00:20:20.993 "strip_size_kb": 0, 00:20:20.993 "state": "online", 00:20:20.993 "raid_level": "raid1", 00:20:20.993 "superblock": false, 00:20:20.993 "num_base_bdevs": 4, 00:20:20.993 "num_base_bdevs_discovered": 3, 00:20:20.993 "num_base_bdevs_operational": 3, 00:20:20.993 "process": { 00:20:20.993 "type": "rebuild", 00:20:20.993 "target": "spare", 00:20:20.993 "progress": { 00:20:20.993 "blocks": 32768, 00:20:20.993 "percent": 50 00:20:20.993 } 00:20:20.993 }, 00:20:20.993 "base_bdevs_list": [ 00:20:20.993 { 00:20:20.993 
"name": "spare", 00:20:20.993 "uuid": "51fc3f29-a676-5ea3-a7b5-7debcc24b133", 00:20:20.993 "is_configured": true, 00:20:20.993 "data_offset": 0, 00:20:20.993 "data_size": 65536 00:20:20.993 }, 00:20:20.993 { 00:20:20.993 "name": null, 00:20:20.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.993 "is_configured": false, 00:20:20.993 "data_offset": 0, 00:20:20.993 "data_size": 65536 00:20:20.993 }, 00:20:20.993 { 00:20:20.993 "name": "BaseBdev3", 00:20:20.993 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db", 00:20:20.993 "is_configured": true, 00:20:20.993 "data_offset": 0, 00:20:20.993 "data_size": 65536 00:20:20.993 }, 00:20:20.993 { 00:20:20.993 "name": "BaseBdev4", 00:20:20.993 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012", 00:20:20.993 "is_configured": true, 00:20:20.993 "data_offset": 0, 00:20:20.994 "data_size": 65536 00:20:20.994 } 00:20:20.994 ] 00:20:20.994 }' 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=673 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:20.994 
18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.994 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.253 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:21.253 "name": "raid_bdev1", 00:20:21.253 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9", 00:20:21.253 "strip_size_kb": 0, 00:20:21.253 "state": "online", 00:20:21.253 "raid_level": "raid1", 00:20:21.253 "superblock": false, 00:20:21.253 "num_base_bdevs": 4, 00:20:21.253 "num_base_bdevs_discovered": 3, 00:20:21.253 "num_base_bdevs_operational": 3, 00:20:21.253 "process": { 00:20:21.253 "type": "rebuild", 00:20:21.253 "target": "spare", 00:20:21.253 "progress": { 00:20:21.253 "blocks": 38912, 00:20:21.253 "percent": 59 00:20:21.253 } 00:20:21.253 }, 00:20:21.253 "base_bdevs_list": [ 00:20:21.253 { 00:20:21.253 "name": "spare", 00:20:21.253 "uuid": "51fc3f29-a676-5ea3-a7b5-7debcc24b133", 00:20:21.253 "is_configured": true, 00:20:21.253 "data_offset": 0, 00:20:21.253 "data_size": 65536 00:20:21.253 }, 00:20:21.253 { 00:20:21.253 "name": null, 00:20:21.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.253 "is_configured": false, 00:20:21.253 "data_offset": 0, 00:20:21.253 "data_size": 65536 00:20:21.253 }, 00:20:21.253 { 00:20:21.253 "name": "BaseBdev3", 00:20:21.253 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db", 00:20:21.253 "is_configured": true, 00:20:21.253 "data_offset": 0, 00:20:21.253 "data_size": 65536 00:20:21.253 }, 00:20:21.253 { 00:20:21.253 "name": "BaseBdev4", 00:20:21.253 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012", 00:20:21.253 "is_configured": true, 00:20:21.253 "data_offset": 0, 00:20:21.253 "data_size": 65536 00:20:21.253 } 
00:20:21.253 ] 00:20:21.253 }' 00:20:21.253 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:21.253 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.253 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:21.253 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.253 18:23:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:22.202 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:22.202 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.202 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:22.202 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:22.202 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:22.202 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:22.461 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.461 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.461 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:22.461 "name": "raid_bdev1", 00:20:22.461 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9", 00:20:22.461 "strip_size_kb": 0, 00:20:22.461 "state": "online", 00:20:22.461 "raid_level": "raid1", 00:20:22.461 "superblock": false, 00:20:22.461 "num_base_bdevs": 4, 00:20:22.461 "num_base_bdevs_discovered": 3, 00:20:22.461 "num_base_bdevs_operational": 3, 
00:20:22.461 "process": { 00:20:22.461 "type": "rebuild", 00:20:22.461 "target": "spare", 00:20:22.461 "progress": { 00:20:22.461 "blocks": 63488, 00:20:22.461 "percent": 96 00:20:22.461 } 00:20:22.461 }, 00:20:22.461 "base_bdevs_list": [ 00:20:22.461 { 00:20:22.461 "name": "spare", 00:20:22.461 "uuid": "51fc3f29-a676-5ea3-a7b5-7debcc24b133", 00:20:22.461 "is_configured": true, 00:20:22.461 "data_offset": 0, 00:20:22.461 "data_size": 65536 00:20:22.461 }, 00:20:22.461 { 00:20:22.461 "name": null, 00:20:22.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.461 "is_configured": false, 00:20:22.461 "data_offset": 0, 00:20:22.461 "data_size": 65536 00:20:22.461 }, 00:20:22.461 { 00:20:22.461 "name": "BaseBdev3", 00:20:22.461 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db", 00:20:22.461 "is_configured": true, 00:20:22.461 "data_offset": 0, 00:20:22.461 "data_size": 65536 00:20:22.461 }, 00:20:22.461 { 00:20:22.461 "name": "BaseBdev4", 00:20:22.461 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012", 00:20:22.461 "is_configured": true, 00:20:22.461 "data_offset": 0, 00:20:22.461 "data_size": 65536 00:20:22.461 } 00:20:22.461 ] 00:20:22.461 }' 00:20:22.461 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:22.461 [2024-07-24 18:23:30.983280] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:22.461 [2024-07-24 18:23:30.983320] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:22.461 [2024-07-24 18:23:30.983346] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.461 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.461 18:23:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:22.461 18:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.461 
18:23:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:23.870 "name": "raid_bdev1", 00:20:23.870 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9", 00:20:23.870 "strip_size_kb": 0, 00:20:23.870 "state": "online", 00:20:23.870 "raid_level": "raid1", 00:20:23.870 "superblock": false, 00:20:23.870 "num_base_bdevs": 4, 00:20:23.870 "num_base_bdevs_discovered": 3, 00:20:23.870 "num_base_bdevs_operational": 3, 00:20:23.870 "base_bdevs_list": [ 00:20:23.870 { 00:20:23.870 "name": "spare", 00:20:23.870 "uuid": "51fc3f29-a676-5ea3-a7b5-7debcc24b133", 00:20:23.870 "is_configured": true, 00:20:23.870 "data_offset": 0, 00:20:23.870 "data_size": 65536 00:20:23.870 }, 00:20:23.870 { 00:20:23.870 "name": null, 00:20:23.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.870 "is_configured": false, 00:20:23.870 "data_offset": 0, 00:20:23.870 "data_size": 65536 00:20:23.870 }, 00:20:23.870 { 
00:20:23.870 "name": "BaseBdev3", 00:20:23.870 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db", 00:20:23.870 "is_configured": true, 00:20:23.870 "data_offset": 0, 00:20:23.870 "data_size": 65536 00:20:23.870 }, 00:20:23.870 { 00:20:23.870 "name": "BaseBdev4", 00:20:23.870 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012", 00:20:23.870 "is_configured": true, 00:20:23.870 "data_offset": 0, 00:20:23.870 "data_size": 65536 00:20:23.870 } 00:20:23.870 ] 00:20:23.870 }' 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.870 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.129 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:24.129 "name": "raid_bdev1", 00:20:24.129 
"uuid": "1456ddb9-3451-428a-9524-52d2704a85f9", 00:20:24.129 "strip_size_kb": 0, 00:20:24.129 "state": "online", 00:20:24.129 "raid_level": "raid1", 00:20:24.129 "superblock": false, 00:20:24.129 "num_base_bdevs": 4, 00:20:24.129 "num_base_bdevs_discovered": 3, 00:20:24.129 "num_base_bdevs_operational": 3, 00:20:24.129 "base_bdevs_list": [ 00:20:24.129 { 00:20:24.129 "name": "spare", 00:20:24.129 "uuid": "51fc3f29-a676-5ea3-a7b5-7debcc24b133", 00:20:24.129 "is_configured": true, 00:20:24.129 "data_offset": 0, 00:20:24.129 "data_size": 65536 00:20:24.129 }, 00:20:24.129 { 00:20:24.129 "name": null, 00:20:24.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.129 "is_configured": false, 00:20:24.129 "data_offset": 0, 00:20:24.129 "data_size": 65536 00:20:24.129 }, 00:20:24.129 { 00:20:24.129 "name": "BaseBdev3", 00:20:24.129 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db", 00:20:24.129 "is_configured": true, 00:20:24.129 "data_offset": 0, 00:20:24.129 "data_size": 65536 00:20:24.129 }, 00:20:24.129 { 00:20:24.129 "name": "BaseBdev4", 00:20:24.129 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012", 00:20:24.129 "is_configured": true, 00:20:24.129 "data_offset": 0, 00:20:24.129 "data_size": 65536 00:20:24.129 } 00:20:24.129 ] 00:20:24.129 }' 00:20:24.129 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:24.129 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:24.129 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.130 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.389 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.389 "name": "raid_bdev1", 00:20:24.389 "uuid": "1456ddb9-3451-428a-9524-52d2704a85f9", 00:20:24.389 "strip_size_kb": 0, 00:20:24.389 "state": "online", 00:20:24.389 "raid_level": "raid1", 00:20:24.389 "superblock": false, 00:20:24.389 "num_base_bdevs": 4, 00:20:24.389 "num_base_bdevs_discovered": 3, 00:20:24.389 "num_base_bdevs_operational": 3, 00:20:24.389 "base_bdevs_list": [ 00:20:24.389 { 00:20:24.389 "name": "spare", 00:20:24.389 "uuid": "51fc3f29-a676-5ea3-a7b5-7debcc24b133", 00:20:24.389 "is_configured": true, 00:20:24.389 "data_offset": 0, 00:20:24.389 "data_size": 65536 00:20:24.389 }, 00:20:24.389 { 00:20:24.389 "name": null, 00:20:24.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.389 "is_configured": false, 00:20:24.389 "data_offset": 0, 00:20:24.389 "data_size": 
65536 00:20:24.389 }, 00:20:24.389 { 00:20:24.389 "name": "BaseBdev3", 00:20:24.389 "uuid": "a840516c-f148-51b1-9e6b-a415f5dc83db", 00:20:24.389 "is_configured": true, 00:20:24.389 "data_offset": 0, 00:20:24.389 "data_size": 65536 00:20:24.389 }, 00:20:24.389 { 00:20:24.389 "name": "BaseBdev4", 00:20:24.389 "uuid": "e70e4b38-0d0c-5776-acb5-231012b85012", 00:20:24.389 "is_configured": true, 00:20:24.389 "data_offset": 0, 00:20:24.389 "data_size": 65536 00:20:24.389 } 00:20:24.389 ] 00:20:24.389 }' 00:20:24.389 18:23:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.389 18:23:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.960 18:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:24.960 [2024-07-24 18:23:33.409117] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.960 [2024-07-24 18:23:33.409137] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.960 [2024-07-24 18:23:33.409180] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.960 [2024-07-24 18:23:33.409233] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.960 [2024-07-24 18:23:33.409241] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x223e370 name raid_bdev1, state offline 00:20:24.960 18:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.960 18:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:25.220 /dev/nbd0 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:25.220 1+0 records in 00:20:25.220 1+0 records out 00:20:25.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000115479 s, 35.5 MB/s 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:25.220 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:25.479 /dev/nbd1 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:25.479 1+0 records in 00:20:25.479 1+0 records out 00:20:25.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277412 s, 14.8 MB/s 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:20:25.479 18:23:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp 
-i 0 /dev/nbd0 /dev/nbd1 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.479 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:25.737 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:25.737 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:25.737 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:25.737 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.737 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.737 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:25.737 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:25.737 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.737 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.737 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:25.997 18:23:34 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 2273643 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 2273643 ']' 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 2273643 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2273643 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2273643' 00:20:25.997 killing process with pid 2273643 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 2273643 
00:20:25.997 Received shutdown signal, test time was about 60.000000 seconds 00:20:25.997 00:20:25.997 Latency(us) 00:20:25.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.997 =================================================================================================================== 00:20:25.997 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:25.997 [2024-07-24 18:23:34.527603] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:25.997 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 2273643 00:20:25.997 [2024-07-24 18:23:34.564797] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:20:26.257 00:20:26.257 real 0m19.081s 00:20:26.257 user 0m25.916s 00:20:26.257 sys 0m3.700s 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.257 ************************************ 00:20:26.257 END TEST raid_rebuild_test 00:20:26.257 ************************************ 00:20:26.257 18:23:34 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:20:26.257 18:23:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:26.257 18:23:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:26.257 18:23:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.257 ************************************ 00:20:26.257 START TEST raid_rebuild_test_sb 00:20:26.257 ************************************ 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local 
raid_level=raid1 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:26.257 
18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=2277227 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 2277227 /var/tmp/spdk-raid.sock 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 2277227 ']' 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-raid.sock...' 00:20:26.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.257 18:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.516 [2024-07-24 18:23:34.886997] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:20:26.516 [2024-07-24 18:23:34.887044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277227 ] 00:20:26.516 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:26.516 Zero copy mechanism will not be used. 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:01.0 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:01.1 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:01.2 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:01.3 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:01.4 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:01.5 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:01.6 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 
0000:b3:01.7 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:02.0 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:02.1 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:02.2 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:02.3 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:02.4 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:02.5 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:02.6 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b3:02.7 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:01.0 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:01.1 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:01.2 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:01.3 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:01.4 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:01.5 cannot be 
used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:01.6 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:01.7 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:02.0 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:02.1 cannot be used 00:20:26.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.516 EAL: Requested device 0000:b5:02.2 cannot be used 00:20:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.517 EAL: Requested device 0000:b5:02.3 cannot be used 00:20:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.517 EAL: Requested device 0000:b5:02.4 cannot be used 00:20:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.517 EAL: Requested device 0000:b5:02.5 cannot be used 00:20:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.517 EAL: Requested device 0000:b5:02.6 cannot be used 00:20:26.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:26.517 EAL: Requested device 0000:b5:02.7 cannot be used 00:20:26.517 [2024-07-24 18:23:34.979092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.517 [2024-07-24 18:23:35.052115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.517 [2024-07-24 18:23:35.111154] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:26.517 [2024-07-24 18:23:35.111178] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.454 18:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.454 18:23:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:20:27.454 18:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:27.454 18:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:27.454 BaseBdev1_malloc 00:20:27.454 18:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:27.454 [2024-07-24 18:23:35.994850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:27.454 [2024-07-24 18:23:35.994891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.454 [2024-07-24 18:23:35.994905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc5370 00:20:27.454 [2024-07-24 18:23:35.994913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.454 [2024-07-24 18:23:35.995943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.454 [2024-07-24 18:23:35.995964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:27.454 BaseBdev1 00:20:27.454 18:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:27.454 18:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:27.713 BaseBdev2_malloc 00:20:27.713 18:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:20:27.972 [2024-07-24 18:23:36.347453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:27.972 [2024-07-24 18:23:36.347484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.972 [2024-07-24 18:23:36.347496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f68e70 00:20:27.972 [2024-07-24 18:23:36.347504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.972 [2024-07-24 18:23:36.348478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.972 [2024-07-24 18:23:36.348500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:27.972 BaseBdev2 00:20:27.972 18:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:27.972 18:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:27.972 BaseBdev3_malloc 00:20:27.972 18:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:28.231 [2024-07-24 18:23:36.707761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:28.231 [2024-07-24 18:23:36.707790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.231 [2024-07-24 18:23:36.707802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5f160 00:20:28.231 [2024-07-24 18:23:36.707825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.231 [2024-07-24 18:23:36.708753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.231 
[2024-07-24 18:23:36.708773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:28.231 BaseBdev3 00:20:28.231 18:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:28.231 18:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:28.490 BaseBdev4_malloc 00:20:28.490 18:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:28.490 [2024-07-24 18:23:37.052016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:28.490 [2024-07-24 18:23:37.052044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.490 [2024-07-24 18:23:37.052056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5fa80 00:20:28.490 [2024-07-24 18:23:37.052063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.490 [2024-07-24 18:23:37.053006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.490 [2024-07-24 18:23:37.053025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:28.490 BaseBdev4 00:20:28.490 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:28.749 spare_malloc 00:20:28.749 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:29.008 
spare_delay 00:20:29.008 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:29.008 [2024-07-24 18:23:37.540538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:29.008 [2024-07-24 18:23:37.540565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.008 [2024-07-24 18:23:37.540576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dbeb70 00:20:29.008 [2024-07-24 18:23:37.540583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.008 [2024-07-24 18:23:37.541507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.008 [2024-07-24 18:23:37.541527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:29.008 spare 00:20:29.008 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:29.268 [2024-07-24 18:23:37.700978] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:29.268 [2024-07-24 18:23:37.701715] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:29.268 [2024-07-24 18:23:37.701748] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:29.268 [2024-07-24 18:23:37.701776] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:29.268 [2024-07-24 18:23:37.701900] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1dc1370 00:20:29.268 [2024-07-24 18:23:37.701907] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:29.268 
[2024-07-24 18:23:37.702016] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dc1340 00:20:29.268 [2024-07-24 18:23:37.702107] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1dc1370 00:20:29.268 [2024-07-24 18:23:37.702113] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1dc1370 00:20:29.268 [2024-07-24 18:23:37.702169] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.268 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.527 18:23:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.527 "name": "raid_bdev1", 00:20:29.527 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:29.527 "strip_size_kb": 0, 00:20:29.527 "state": "online", 00:20:29.527 "raid_level": "raid1", 00:20:29.527 "superblock": true, 00:20:29.527 "num_base_bdevs": 4, 00:20:29.527 "num_base_bdevs_discovered": 4, 00:20:29.527 "num_base_bdevs_operational": 4, 00:20:29.527 "base_bdevs_list": [ 00:20:29.527 { 00:20:29.527 "name": "BaseBdev1", 00:20:29.527 "uuid": "ecc41c21-6172-53d8-912c-6285ea9e191b", 00:20:29.527 "is_configured": true, 00:20:29.527 "data_offset": 2048, 00:20:29.527 "data_size": 63488 00:20:29.527 }, 00:20:29.527 { 00:20:29.527 "name": "BaseBdev2", 00:20:29.527 "uuid": "20326198-6b76-5d15-8147-44dee979c8b9", 00:20:29.527 "is_configured": true, 00:20:29.527 "data_offset": 2048, 00:20:29.527 "data_size": 63488 00:20:29.527 }, 00:20:29.527 { 00:20:29.527 "name": "BaseBdev3", 00:20:29.527 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:29.527 "is_configured": true, 00:20:29.527 "data_offset": 2048, 00:20:29.527 "data_size": 63488 00:20:29.527 }, 00:20:29.527 { 00:20:29.527 "name": "BaseBdev4", 00:20:29.527 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:29.527 "is_configured": true, 00:20:29.527 "data_offset": 2048, 00:20:29.527 "data_size": 63488 00:20:29.527 } 00:20:29.527 ] 00:20:29.527 }' 00:20:29.527 18:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.527 18:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.096 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:30.096 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:20:30.096 [2024-07-24 18:23:38.551347] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:20:30.096 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:20:30.096 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.096 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:30.355 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:20:30.355 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:20:30.355 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:20:30.355 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:20:30.355 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:30.355 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:30.355 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:30.355 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 
/dev/nbd0 00:20:30.356 [2024-07-24 18:23:38.904109] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f5e9a0 00:20:30.356 /dev/nbd0 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:30.356 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:30.356 1+0 records in 00:20:30.356 1+0 records out 00:20:30.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025635 s, 16.0 MB/s 00:20:30.615 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:30.615 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:20:30.615 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:30.615 18:23:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:30.615 18:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:20:30.615 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:30.615 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:30.615 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:20:30.615 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:20:30.615 18:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:35.891 63488+0 records in 00:20:35.891 63488+0 records out 00:20:35.891 32505856 bytes (33 MB, 31 MiB) copied, 4.94529 s, 6.6 MB/s 00:20:35.891 18:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:35.891 18:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:35.891 18:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:35.891 18:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:35.891 18:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:35.891 18:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.891 18:23:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.891 [2024-07-24 18:23:44.111111] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:35.891 [2024-07-24 18:23:44.279580] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.891 18:23:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.891 "name": "raid_bdev1", 00:20:35.891 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:35.891 "strip_size_kb": 0, 00:20:35.891 "state": "online", 00:20:35.891 "raid_level": "raid1", 00:20:35.891 "superblock": true, 00:20:35.891 "num_base_bdevs": 4, 00:20:35.891 "num_base_bdevs_discovered": 3, 00:20:35.891 "num_base_bdevs_operational": 3, 00:20:35.891 "base_bdevs_list": [ 00:20:35.891 { 00:20:35.891 "name": null, 00:20:35.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.891 "is_configured": false, 00:20:35.891 "data_offset": 2048, 00:20:35.891 "data_size": 63488 00:20:35.891 }, 00:20:35.891 { 00:20:35.891 "name": "BaseBdev2", 00:20:35.891 "uuid": "20326198-6b76-5d15-8147-44dee979c8b9", 00:20:35.891 "is_configured": true, 00:20:35.891 "data_offset": 2048, 00:20:35.891 "data_size": 63488 00:20:35.891 }, 00:20:35.891 { 00:20:35.891 "name": "BaseBdev3", 00:20:35.891 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:35.891 "is_configured": true, 00:20:35.891 "data_offset": 2048, 00:20:35.891 "data_size": 63488 00:20:35.891 }, 00:20:35.891 { 00:20:35.891 "name": "BaseBdev4", 00:20:35.891 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:35.891 "is_configured": true, 00:20:35.891 "data_offset": 2048, 00:20:35.891 "data_size": 63488 00:20:35.891 } 00:20:35.891 ] 00:20:35.891 }' 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.891 18:23:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.459 18:23:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:36.718 [2024-07-24 18:23:45.101708] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.718 [2024-07-24 18:23:45.105347] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dc1340 00:20:36.718 [2024-07-24 18:23:45.106923] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:36.718 18:23:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:37.657 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.657 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:37.657 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:37.657 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:37.657 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:37.657 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.657 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.917 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:37.917 "name": "raid_bdev1", 00:20:37.917 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:37.917 "strip_size_kb": 0, 00:20:37.917 "state": "online", 00:20:37.917 "raid_level": "raid1", 00:20:37.917 "superblock": true, 00:20:37.917 "num_base_bdevs": 4, 00:20:37.917 
"num_base_bdevs_discovered": 4, 00:20:37.917 "num_base_bdevs_operational": 4, 00:20:37.917 "process": { 00:20:37.917 "type": "rebuild", 00:20:37.917 "target": "spare", 00:20:37.917 "progress": { 00:20:37.917 "blocks": 22528, 00:20:37.917 "percent": 35 00:20:37.917 } 00:20:37.917 }, 00:20:37.917 "base_bdevs_list": [ 00:20:37.917 { 00:20:37.917 "name": "spare", 00:20:37.917 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:37.917 "is_configured": true, 00:20:37.917 "data_offset": 2048, 00:20:37.917 "data_size": 63488 00:20:37.917 }, 00:20:37.917 { 00:20:37.917 "name": "BaseBdev2", 00:20:37.917 "uuid": "20326198-6b76-5d15-8147-44dee979c8b9", 00:20:37.917 "is_configured": true, 00:20:37.917 "data_offset": 2048, 00:20:37.917 "data_size": 63488 00:20:37.917 }, 00:20:37.917 { 00:20:37.917 "name": "BaseBdev3", 00:20:37.917 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:37.917 "is_configured": true, 00:20:37.917 "data_offset": 2048, 00:20:37.917 "data_size": 63488 00:20:37.917 }, 00:20:37.917 { 00:20:37.917 "name": "BaseBdev4", 00:20:37.917 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:37.917 "is_configured": true, 00:20:37.917 "data_offset": 2048, 00:20:37.917 "data_size": 63488 00:20:37.917 } 00:20:37.917 ] 00:20:37.917 }' 00:20:37.917 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:37.917 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.917 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:37.917 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.917 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:38.177 [2024-07-24 18:23:46.519089] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.177 [2024-07-24 18:23:46.617368] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:38.177 [2024-07-24 18:23:46.617397] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.177 [2024-07-24 18:23:46.617407] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.177 [2024-07-24 18:23:46.617412] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.177 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:38.436 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.436 "name": "raid_bdev1", 00:20:38.436 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:38.436 "strip_size_kb": 0, 00:20:38.436 "state": "online", 00:20:38.436 "raid_level": "raid1", 00:20:38.436 "superblock": true, 00:20:38.436 "num_base_bdevs": 4, 00:20:38.436 "num_base_bdevs_discovered": 3, 00:20:38.436 "num_base_bdevs_operational": 3, 00:20:38.436 "base_bdevs_list": [ 00:20:38.436 { 00:20:38.436 "name": null, 00:20:38.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.436 "is_configured": false, 00:20:38.436 "data_offset": 2048, 00:20:38.436 "data_size": 63488 00:20:38.436 }, 00:20:38.436 { 00:20:38.436 "name": "BaseBdev2", 00:20:38.436 "uuid": "20326198-6b76-5d15-8147-44dee979c8b9", 00:20:38.436 "is_configured": true, 00:20:38.436 "data_offset": 2048, 00:20:38.436 "data_size": 63488 00:20:38.436 }, 00:20:38.436 { 00:20:38.436 "name": "BaseBdev3", 00:20:38.436 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:38.436 "is_configured": true, 00:20:38.436 "data_offset": 2048, 00:20:38.436 "data_size": 63488 00:20:38.436 }, 00:20:38.436 { 00:20:38.436 "name": "BaseBdev4", 00:20:38.436 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:38.436 "is_configured": true, 00:20:38.436 "data_offset": 2048, 00:20:38.436 "data_size": 63488 00:20:38.436 } 00:20:38.436 ] 00:20:38.436 }' 00:20:38.436 18:23:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.436 18:23:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.695 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:38.695 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:38.695 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:38.695 
18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:38.695 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:38.695 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.695 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.955 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:38.955 "name": "raid_bdev1", 00:20:38.955 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:38.955 "strip_size_kb": 0, 00:20:38.955 "state": "online", 00:20:38.955 "raid_level": "raid1", 00:20:38.955 "superblock": true, 00:20:38.955 "num_base_bdevs": 4, 00:20:38.955 "num_base_bdevs_discovered": 3, 00:20:38.955 "num_base_bdevs_operational": 3, 00:20:38.955 "base_bdevs_list": [ 00:20:38.955 { 00:20:38.955 "name": null, 00:20:38.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.955 "is_configured": false, 00:20:38.955 "data_offset": 2048, 00:20:38.955 "data_size": 63488 00:20:38.955 }, 00:20:38.955 { 00:20:38.955 "name": "BaseBdev2", 00:20:38.955 "uuid": "20326198-6b76-5d15-8147-44dee979c8b9", 00:20:38.955 "is_configured": true, 00:20:38.955 "data_offset": 2048, 00:20:38.955 "data_size": 63488 00:20:38.955 }, 00:20:38.955 { 00:20:38.955 "name": "BaseBdev3", 00:20:38.955 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:38.955 "is_configured": true, 00:20:38.955 "data_offset": 2048, 00:20:38.955 "data_size": 63488 00:20:38.955 }, 00:20:38.955 { 00:20:38.955 "name": "BaseBdev4", 00:20:38.955 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:38.955 "is_configured": true, 00:20:38.955 "data_offset": 2048, 00:20:38.955 "data_size": 63488 00:20:38.955 } 00:20:38.955 ] 00:20:38.955 }' 00:20:38.955 18:23:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:38.955 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:38.955 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:38.955 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:38.955 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:39.214 [2024-07-24 18:23:47.699829] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:39.214 [2024-07-24 18:23:47.703404] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f5df60 00:20:39.214 [2024-07-24 18:23:47.704435] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:39.214 18:23:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:40.151 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.151 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:40.151 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:40.151 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:40.151 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:40.151 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.151 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.410 18:23:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:40.410 "name": "raid_bdev1", 00:20:40.411 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:40.411 "strip_size_kb": 0, 00:20:40.411 "state": "online", 00:20:40.411 "raid_level": "raid1", 00:20:40.411 "superblock": true, 00:20:40.411 "num_base_bdevs": 4, 00:20:40.411 "num_base_bdevs_discovered": 4, 00:20:40.411 "num_base_bdevs_operational": 4, 00:20:40.411 "process": { 00:20:40.411 "type": "rebuild", 00:20:40.411 "target": "spare", 00:20:40.411 "progress": { 00:20:40.411 "blocks": 22528, 00:20:40.411 "percent": 35 00:20:40.411 } 00:20:40.411 }, 00:20:40.411 "base_bdevs_list": [ 00:20:40.411 { 00:20:40.411 "name": "spare", 00:20:40.411 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:40.411 "is_configured": true, 00:20:40.411 "data_offset": 2048, 00:20:40.411 "data_size": 63488 00:20:40.411 }, 00:20:40.411 { 00:20:40.411 "name": "BaseBdev2", 00:20:40.411 "uuid": "20326198-6b76-5d15-8147-44dee979c8b9", 00:20:40.411 "is_configured": true, 00:20:40.411 "data_offset": 2048, 00:20:40.411 "data_size": 63488 00:20:40.411 }, 00:20:40.411 { 00:20:40.411 "name": "BaseBdev3", 00:20:40.411 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:40.411 "is_configured": true, 00:20:40.411 "data_offset": 2048, 00:20:40.411 "data_size": 63488 00:20:40.411 }, 00:20:40.411 { 00:20:40.411 "name": "BaseBdev4", 00:20:40.411 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:40.411 "is_configured": true, 00:20:40.411 "data_offset": 2048, 00:20:40.411 "data_size": 63488 00:20:40.411 } 00:20:40.411 ] 00:20:40.411 }' 00:20:40.411 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:40.411 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.411 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:40.411 18:23:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.411 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:20:40.411 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:20:40.411 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:20:40.411 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:20:40.411 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:20:40.411 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:20:40.411 18:23:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:40.670 [2024-07-24 18:23:49.132609] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:40.931 [2024-07-24 18:23:49.315104] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1f5df60 00:20:40.931 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:20:40.931 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:20:40.931 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.931 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:40.931 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:40.931 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:40.931 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:40.931 18:23:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.931 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.931 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:40.931 "name": "raid_bdev1", 00:20:40.931 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:40.931 "strip_size_kb": 0, 00:20:40.931 "state": "online", 00:20:40.931 "raid_level": "raid1", 00:20:40.931 "superblock": true, 00:20:40.931 "num_base_bdevs": 4, 00:20:40.931 "num_base_bdevs_discovered": 3, 00:20:40.931 "num_base_bdevs_operational": 3, 00:20:40.931 "process": { 00:20:40.931 "type": "rebuild", 00:20:40.932 "target": "spare", 00:20:40.932 "progress": { 00:20:40.932 "blocks": 32768, 00:20:40.932 "percent": 51 00:20:40.932 } 00:20:40.932 }, 00:20:40.932 "base_bdevs_list": [ 00:20:40.932 { 00:20:40.932 "name": "spare", 00:20:40.932 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:40.932 "is_configured": true, 00:20:40.932 "data_offset": 2048, 00:20:40.932 "data_size": 63488 00:20:40.932 }, 00:20:40.932 { 00:20:40.932 "name": null, 00:20:40.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.932 "is_configured": false, 00:20:40.932 "data_offset": 2048, 00:20:40.932 "data_size": 63488 00:20:40.932 }, 00:20:40.932 { 00:20:40.932 "name": "BaseBdev3", 00:20:40.932 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:40.932 "is_configured": true, 00:20:40.932 "data_offset": 2048, 00:20:40.932 "data_size": 63488 00:20:40.932 }, 00:20:40.932 { 00:20:40.932 "name": "BaseBdev4", 00:20:40.932 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:40.932 "is_configured": true, 00:20:40.932 "data_offset": 2048, 00:20:40.932 "data_size": 63488 00:20:40.932 } 00:20:40.932 ] 00:20:40.932 }' 00:20:40.932 18:23:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=693 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.252 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:41.252 "name": "raid_bdev1", 00:20:41.252 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:41.252 "strip_size_kb": 0, 00:20:41.252 "state": "online", 00:20:41.252 "raid_level": "raid1", 00:20:41.252 "superblock": true, 00:20:41.252 "num_base_bdevs": 4, 00:20:41.252 "num_base_bdevs_discovered": 3, 00:20:41.252 "num_base_bdevs_operational": 3, 00:20:41.252 "process": { 00:20:41.252 "type": 
"rebuild", 00:20:41.252 "target": "spare", 00:20:41.252 "progress": { 00:20:41.252 "blocks": 38912, 00:20:41.252 "percent": 61 00:20:41.252 } 00:20:41.252 }, 00:20:41.252 "base_bdevs_list": [ 00:20:41.252 { 00:20:41.252 "name": "spare", 00:20:41.252 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:41.252 "is_configured": true, 00:20:41.252 "data_offset": 2048, 00:20:41.252 "data_size": 63488 00:20:41.252 }, 00:20:41.252 { 00:20:41.252 "name": null, 00:20:41.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.252 "is_configured": false, 00:20:41.252 "data_offset": 2048, 00:20:41.252 "data_size": 63488 00:20:41.252 }, 00:20:41.252 { 00:20:41.252 "name": "BaseBdev3", 00:20:41.252 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:41.252 "is_configured": true, 00:20:41.252 "data_offset": 2048, 00:20:41.253 "data_size": 63488 00:20:41.253 }, 00:20:41.253 { 00:20:41.253 "name": "BaseBdev4", 00:20:41.253 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:41.253 "is_configured": true, 00:20:41.253 "data_offset": 2048, 00:20:41.253 "data_size": 63488 00:20:41.253 } 00:20:41.253 ] 00:20:41.253 }' 00:20:41.253 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:41.253 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:41.253 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:41.512 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.512 18:23:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:42.451 18:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:42.451 18:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.451 18:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:20:42.451 18:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:42.451 18:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:42.451 18:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:42.451 18:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.451 18:23:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.451 [2024-07-24 18:23:50.926155] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:42.451 [2024-07-24 18:23:50.926199] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:42.451 [2024-07-24 18:23:50.926289] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.451 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:42.451 "name": "raid_bdev1", 00:20:42.451 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:42.451 "strip_size_kb": 0, 00:20:42.451 "state": "online", 00:20:42.451 "raid_level": "raid1", 00:20:42.451 "superblock": true, 00:20:42.451 "num_base_bdevs": 4, 00:20:42.451 "num_base_bdevs_discovered": 3, 00:20:42.451 "num_base_bdevs_operational": 3, 00:20:42.451 "base_bdevs_list": [ 00:20:42.451 { 00:20:42.451 "name": "spare", 00:20:42.451 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:42.451 "is_configured": true, 00:20:42.451 "data_offset": 2048, 00:20:42.451 "data_size": 63488 00:20:42.451 }, 00:20:42.451 { 00:20:42.451 "name": null, 00:20:42.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.451 "is_configured": false, 00:20:42.451 "data_offset": 2048, 00:20:42.451 "data_size": 63488 00:20:42.451 }, 00:20:42.451 { 
00:20:42.451 "name": "BaseBdev3", 00:20:42.451 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:42.451 "is_configured": true, 00:20:42.451 "data_offset": 2048, 00:20:42.451 "data_size": 63488 00:20:42.451 }, 00:20:42.451 { 00:20:42.451 "name": "BaseBdev4", 00:20:42.451 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:42.451 "is_configured": true, 00:20:42.451 "data_offset": 2048, 00:20:42.451 "data_size": 63488 00:20:42.451 } 00:20:42.451 ] 00:20:42.451 }' 00:20:42.451 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.710 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:20:42.710 "name": "raid_bdev1", 00:20:42.710 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:42.710 "strip_size_kb": 0, 00:20:42.710 "state": "online", 00:20:42.710 "raid_level": "raid1", 00:20:42.710 "superblock": true, 00:20:42.710 "num_base_bdevs": 4, 00:20:42.710 "num_base_bdevs_discovered": 3, 00:20:42.710 "num_base_bdevs_operational": 3, 00:20:42.710 "base_bdevs_list": [ 00:20:42.710 { 00:20:42.710 "name": "spare", 00:20:42.710 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:42.710 "is_configured": true, 00:20:42.710 "data_offset": 2048, 00:20:42.710 "data_size": 63488 00:20:42.710 }, 00:20:42.710 { 00:20:42.710 "name": null, 00:20:42.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.710 "is_configured": false, 00:20:42.710 "data_offset": 2048, 00:20:42.710 "data_size": 63488 00:20:42.710 }, 00:20:42.710 { 00:20:42.710 "name": "BaseBdev3", 00:20:42.711 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:42.711 "is_configured": true, 00:20:42.711 "data_offset": 2048, 00:20:42.711 "data_size": 63488 00:20:42.711 }, 00:20:42.711 { 00:20:42.711 "name": "BaseBdev4", 00:20:42.711 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:42.711 "is_configured": true, 00:20:42.711 "data_offset": 2048, 00:20:42.711 "data_size": 63488 00:20:42.711 } 00:20:42.711 ] 00:20:42.711 }' 00:20:42.711 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.970 "name": "raid_bdev1", 00:20:42.970 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:42.970 "strip_size_kb": 0, 00:20:42.970 "state": "online", 00:20:42.970 "raid_level": "raid1", 00:20:42.970 "superblock": true, 00:20:42.970 "num_base_bdevs": 4, 00:20:42.970 "num_base_bdevs_discovered": 3, 00:20:42.970 "num_base_bdevs_operational": 3, 00:20:42.970 "base_bdevs_list": [ 00:20:42.970 { 00:20:42.970 "name": "spare", 00:20:42.970 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:42.970 "is_configured": true, 00:20:42.970 "data_offset": 2048, 00:20:42.970 "data_size": 63488 00:20:42.970 }, 00:20:42.970 { 00:20:42.970 "name": null, 00:20:42.970 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:42.970 "is_configured": false, 00:20:42.970 "data_offset": 2048, 00:20:42.970 "data_size": 63488 00:20:42.970 }, 00:20:42.970 { 00:20:42.970 "name": "BaseBdev3", 00:20:42.970 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:42.970 "is_configured": true, 00:20:42.970 "data_offset": 2048, 00:20:42.970 "data_size": 63488 00:20:42.970 }, 00:20:42.970 { 00:20:42.970 "name": "BaseBdev4", 00:20:42.970 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:42.970 "is_configured": true, 00:20:42.970 "data_offset": 2048, 00:20:42.970 "data_size": 63488 00:20:42.970 } 00:20:42.970 ] 00:20:42.970 }' 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.970 18:23:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.539 18:23:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:43.799 [2024-07-24 18:23:52.153602] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.799 [2024-07-24 18:23:52.153624] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.799 [2024-07-24 18:23:52.153680] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.799 [2024-07-24 18:23:52.153731] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.799 [2024-07-24 18:23:52.153739] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dc1370 name raid_bdev1, state offline 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:43.799 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:44.058 /dev/nbd0 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.058 1+0 records in 00:20:44.058 1+0 records out 00:20:44.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257239 s, 15.9 MB/s 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:44.058 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:44.318 /dev/nbd1 00:20:44.318 18:23:52 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.318 1+0 records in 00:20:44.318 1+0 records out 00:20:44.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272003 s, 15.1 MB/s 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
return 0 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.318 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:44.578 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:44.578 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:44.578 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:44.578 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.578 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.578 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:44.578 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:44.578 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 
00:20:44.578 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.578 18:23:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:44.578 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:44.578 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:44.578 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:44.578 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.578 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.578 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:44.837 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:44.837 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:44.837 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:20:44.837 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:44.837 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:45.096 [2024-07-24 18:23:53.506581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:45.096 [2024-07-24 18:23:53.506618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.096 [2024-07-24 18:23:53.506637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5dd60 
00:20:45.096 [2024-07-24 18:23:53.506645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.096 [2024-07-24 18:23:53.507866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.096 [2024-07-24 18:23:53.507888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:45.096 [2024-07-24 18:23:53.507948] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:45.096 [2024-07-24 18:23:53.507969] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:45.096 [2024-07-24 18:23:53.508042] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:45.096 [2024-07-24 18:23:53.508089] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:45.096 spare 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.096 18:23:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.096 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.096 [2024-07-24 18:23:53.608385] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1dc0720 00:20:45.096 [2024-07-24 18:23:53.608398] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:45.096 [2024-07-24 18:23:53.608545] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ac1fb0 00:20:45.096 [2024-07-24 18:23:53.608664] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1dc0720 00:20:45.096 [2024-07-24 18:23:53.608671] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1dc0720 00:20:45.096 [2024-07-24 18:23:53.608743] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.355 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.355 "name": "raid_bdev1", 00:20:45.355 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:45.355 "strip_size_kb": 0, 00:20:45.355 "state": "online", 00:20:45.355 "raid_level": "raid1", 00:20:45.355 "superblock": true, 00:20:45.355 "num_base_bdevs": 4, 00:20:45.355 "num_base_bdevs_discovered": 3, 00:20:45.355 "num_base_bdevs_operational": 3, 00:20:45.355 "base_bdevs_list": [ 00:20:45.355 { 00:20:45.355 "name": "spare", 00:20:45.355 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:45.355 "is_configured": true, 00:20:45.355 "data_offset": 2048, 00:20:45.355 "data_size": 63488 00:20:45.355 }, 00:20:45.355 { 00:20:45.355 "name": null, 00:20:45.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.355 "is_configured": false, 00:20:45.355 "data_offset": 2048, 00:20:45.355 "data_size": 63488 00:20:45.355 }, 
00:20:45.355 { 00:20:45.355 "name": "BaseBdev3", 00:20:45.355 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:45.355 "is_configured": true, 00:20:45.355 "data_offset": 2048, 00:20:45.355 "data_size": 63488 00:20:45.355 }, 00:20:45.355 { 00:20:45.355 "name": "BaseBdev4", 00:20:45.355 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:45.355 "is_configured": true, 00:20:45.355 "data_offset": 2048, 00:20:45.355 "data_size": 63488 00:20:45.355 } 00:20:45.355 ] 00:20:45.355 }' 00:20:45.355 18:23:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.355 18:23:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.614 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.615 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:45.615 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:45.615 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:45.615 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:45.615 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.615 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.874 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:45.874 "name": "raid_bdev1", 00:20:45.874 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:45.874 "strip_size_kb": 0, 00:20:45.874 "state": "online", 00:20:45.874 "raid_level": "raid1", 00:20:45.874 "superblock": true, 00:20:45.874 "num_base_bdevs": 4, 00:20:45.874 "num_base_bdevs_discovered": 3, 00:20:45.874 
"num_base_bdevs_operational": 3, 00:20:45.874 "base_bdevs_list": [ 00:20:45.874 { 00:20:45.874 "name": "spare", 00:20:45.874 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:45.874 "is_configured": true, 00:20:45.874 "data_offset": 2048, 00:20:45.874 "data_size": 63488 00:20:45.874 }, 00:20:45.874 { 00:20:45.874 "name": null, 00:20:45.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.874 "is_configured": false, 00:20:45.874 "data_offset": 2048, 00:20:45.874 "data_size": 63488 00:20:45.874 }, 00:20:45.874 { 00:20:45.874 "name": "BaseBdev3", 00:20:45.874 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:45.874 "is_configured": true, 00:20:45.874 "data_offset": 2048, 00:20:45.874 "data_size": 63488 00:20:45.874 }, 00:20:45.874 { 00:20:45.874 "name": "BaseBdev4", 00:20:45.874 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:45.874 "is_configured": true, 00:20:45.874 "data_offset": 2048, 00:20:45.874 "data_size": 63488 00:20:45.874 } 00:20:45.874 ] 00:20:45.874 }' 00:20:45.874 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:45.874 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:45.874 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:45.874 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:45.874 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.874 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:46.133 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:20:46.133 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:46.392 [2024-07-24 18:23:54.789961] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.392 "name": "raid_bdev1", 00:20:46.392 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:46.392 "strip_size_kb": 0, 00:20:46.392 "state": "online", 00:20:46.392 "raid_level": "raid1", 00:20:46.392 "superblock": true, 00:20:46.392 
"num_base_bdevs": 4, 00:20:46.392 "num_base_bdevs_discovered": 2, 00:20:46.392 "num_base_bdevs_operational": 2, 00:20:46.392 "base_bdevs_list": [ 00:20:46.392 { 00:20:46.392 "name": null, 00:20:46.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.392 "is_configured": false, 00:20:46.392 "data_offset": 2048, 00:20:46.392 "data_size": 63488 00:20:46.392 }, 00:20:46.392 { 00:20:46.392 "name": null, 00:20:46.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.392 "is_configured": false, 00:20:46.392 "data_offset": 2048, 00:20:46.392 "data_size": 63488 00:20:46.392 }, 00:20:46.392 { 00:20:46.392 "name": "BaseBdev3", 00:20:46.392 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:46.392 "is_configured": true, 00:20:46.392 "data_offset": 2048, 00:20:46.392 "data_size": 63488 00:20:46.392 }, 00:20:46.392 { 00:20:46.392 "name": "BaseBdev4", 00:20:46.392 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:46.392 "is_configured": true, 00:20:46.392 "data_offset": 2048, 00:20:46.392 "data_size": 63488 00:20:46.392 } 00:20:46.392 ] 00:20:46.392 }' 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.392 18:23:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.961 18:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:47.220 [2024-07-24 18:23:55.624126] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:47.220 [2024-07-24 18:23:55.624252] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:47.220 [2024-07-24 18:23:55.624264] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:47.220 [2024-07-24 18:23:55.624286] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:47.220 [2024-07-24 18:23:55.627761] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e5c350 00:20:47.220 [2024-07-24 18:23:55.629316] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:47.220 18:23:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:20:48.157 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.157 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:48.157 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:48.157 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:48.157 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:48.157 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.157 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.416 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:48.416 "name": "raid_bdev1", 00:20:48.416 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:48.416 "strip_size_kb": 0, 00:20:48.416 "state": "online", 00:20:48.416 "raid_level": "raid1", 00:20:48.416 "superblock": true, 00:20:48.416 "num_base_bdevs": 4, 00:20:48.416 "num_base_bdevs_discovered": 3, 00:20:48.416 "num_base_bdevs_operational": 3, 00:20:48.416 "process": { 00:20:48.416 "type": "rebuild", 00:20:48.416 "target": "spare", 00:20:48.416 "progress": { 00:20:48.416 "blocks": 22528, 00:20:48.416 "percent": 35 
00:20:48.416 } 00:20:48.416 }, 00:20:48.416 "base_bdevs_list": [ 00:20:48.416 { 00:20:48.416 "name": "spare", 00:20:48.416 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:48.416 "is_configured": true, 00:20:48.416 "data_offset": 2048, 00:20:48.416 "data_size": 63488 00:20:48.416 }, 00:20:48.416 { 00:20:48.416 "name": null, 00:20:48.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.416 "is_configured": false, 00:20:48.416 "data_offset": 2048, 00:20:48.416 "data_size": 63488 00:20:48.416 }, 00:20:48.416 { 00:20:48.416 "name": "BaseBdev3", 00:20:48.416 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:48.416 "is_configured": true, 00:20:48.416 "data_offset": 2048, 00:20:48.416 "data_size": 63488 00:20:48.416 }, 00:20:48.416 { 00:20:48.416 "name": "BaseBdev4", 00:20:48.416 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:48.416 "is_configured": true, 00:20:48.416 "data_offset": 2048, 00:20:48.416 "data_size": 63488 00:20:48.416 } 00:20:48.416 ] 00:20:48.416 }' 00:20:48.416 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:48.416 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.416 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:48.416 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.416 18:23:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:48.675 [2024-07-24 18:23:57.073650] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:48.675 [2024-07-24 18:23:57.139826] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:48.675 [2024-07-24 18:23:57.139862] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:20:48.675 [2024-07-24 18:23:57.139872] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:48.675 [2024-07-24 18:23:57.139877] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:48.675 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:48.675 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:48.675 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:48.675 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:48.675 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:48.676 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:48.676 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.676 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.676 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.676 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.676 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.676 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.935 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.935 "name": "raid_bdev1", 00:20:48.935 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:48.935 "strip_size_kb": 0, 00:20:48.935 "state": "online", 00:20:48.935 
"raid_level": "raid1", 00:20:48.935 "superblock": true, 00:20:48.935 "num_base_bdevs": 4, 00:20:48.935 "num_base_bdevs_discovered": 2, 00:20:48.935 "num_base_bdevs_operational": 2, 00:20:48.935 "base_bdevs_list": [ 00:20:48.935 { 00:20:48.935 "name": null, 00:20:48.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.935 "is_configured": false, 00:20:48.935 "data_offset": 2048, 00:20:48.935 "data_size": 63488 00:20:48.935 }, 00:20:48.935 { 00:20:48.935 "name": null, 00:20:48.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.935 "is_configured": false, 00:20:48.935 "data_offset": 2048, 00:20:48.935 "data_size": 63488 00:20:48.935 }, 00:20:48.935 { 00:20:48.935 "name": "BaseBdev3", 00:20:48.935 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:48.935 "is_configured": true, 00:20:48.935 "data_offset": 2048, 00:20:48.935 "data_size": 63488 00:20:48.935 }, 00:20:48.935 { 00:20:48.935 "name": "BaseBdev4", 00:20:48.935 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:48.935 "is_configured": true, 00:20:48.935 "data_offset": 2048, 00:20:48.935 "data_size": 63488 00:20:48.935 } 00:20:48.935 ] 00:20:48.935 }' 00:20:48.935 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.935 18:23:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.503 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:49.503 [2024-07-24 18:23:57.965470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:49.503 [2024-07-24 18:23:57.965515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.504 [2024-07-24 18:23:57.965548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc1de0 00:20:49.504 [2024-07-24 18:23:57.965557] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.504 [2024-07-24 18:23:57.965865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.504 [2024-07-24 18:23:57.965880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:49.504 [2024-07-24 18:23:57.965941] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:49.504 [2024-07-24 18:23:57.965950] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:49.504 [2024-07-24 18:23:57.965957] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:49.504 [2024-07-24 18:23:57.965971] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.504 [2024-07-24 18:23:57.969521] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dc06d0 00:20:49.504 spare 00:20:49.504 [2024-07-24 18:23:57.970598] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:49.504 18:23:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:20:50.441 18:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.441 18:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:50.441 18:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:50.441 18:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:50.441 18:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:50.441 18:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:20:50.441 18:23:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.700 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:50.700 "name": "raid_bdev1", 00:20:50.700 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:50.700 "strip_size_kb": 0, 00:20:50.700 "state": "online", 00:20:50.700 "raid_level": "raid1", 00:20:50.700 "superblock": true, 00:20:50.700 "num_base_bdevs": 4, 00:20:50.700 "num_base_bdevs_discovered": 3, 00:20:50.700 "num_base_bdevs_operational": 3, 00:20:50.700 "process": { 00:20:50.700 "type": "rebuild", 00:20:50.700 "target": "spare", 00:20:50.700 "progress": { 00:20:50.700 "blocks": 22528, 00:20:50.700 "percent": 35 00:20:50.700 } 00:20:50.700 }, 00:20:50.700 "base_bdevs_list": [ 00:20:50.700 { 00:20:50.700 "name": "spare", 00:20:50.700 "uuid": "58fa3e0d-a563-536c-beda-c48350a05a64", 00:20:50.700 "is_configured": true, 00:20:50.700 "data_offset": 2048, 00:20:50.700 "data_size": 63488 00:20:50.700 }, 00:20:50.700 { 00:20:50.700 "name": null, 00:20:50.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.700 "is_configured": false, 00:20:50.700 "data_offset": 2048, 00:20:50.700 "data_size": 63488 00:20:50.700 }, 00:20:50.700 { 00:20:50.700 "name": "BaseBdev3", 00:20:50.700 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:50.700 "is_configured": true, 00:20:50.700 "data_offset": 2048, 00:20:50.700 "data_size": 63488 00:20:50.700 }, 00:20:50.700 { 00:20:50.700 "name": "BaseBdev4", 00:20:50.700 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:50.700 "is_configured": true, 00:20:50.700 "data_offset": 2048, 00:20:50.700 "data_size": 63488 00:20:50.700 } 00:20:50.700 ] 00:20:50.700 }' 00:20:50.700 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:50.701 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.701 
18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:50.701 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.701 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:50.960 [2024-07-24 18:23:59.385549] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.960 [2024-07-24 18:23:59.481010] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:50.960 [2024-07-24 18:23:59.481042] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.960 [2024-07-24 18:23:59.481052] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.960 [2024-07-24 18:23:59.481073] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.960 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.219 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.219 "name": "raid_bdev1", 00:20:51.219 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:51.219 "strip_size_kb": 0, 00:20:51.219 "state": "online", 00:20:51.219 "raid_level": "raid1", 00:20:51.219 "superblock": true, 00:20:51.219 "num_base_bdevs": 4, 00:20:51.219 "num_base_bdevs_discovered": 2, 00:20:51.219 "num_base_bdevs_operational": 2, 00:20:51.219 "base_bdevs_list": [ 00:20:51.219 { 00:20:51.219 "name": null, 00:20:51.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.219 "is_configured": false, 00:20:51.219 "data_offset": 2048, 00:20:51.219 "data_size": 63488 00:20:51.219 }, 00:20:51.219 { 00:20:51.219 "name": null, 00:20:51.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.219 "is_configured": false, 00:20:51.219 "data_offset": 2048, 00:20:51.219 "data_size": 63488 00:20:51.219 }, 00:20:51.219 { 00:20:51.219 "name": "BaseBdev3", 00:20:51.219 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:51.219 "is_configured": true, 00:20:51.219 "data_offset": 2048, 00:20:51.219 "data_size": 63488 00:20:51.219 }, 00:20:51.219 { 00:20:51.219 "name": "BaseBdev4", 00:20:51.219 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:51.219 "is_configured": true, 00:20:51.219 "data_offset": 2048, 00:20:51.219 "data_size": 63488 00:20:51.219 } 00:20:51.219 ] 00:20:51.219 }' 00:20:51.219 18:23:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:20:51.219 18:23:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.787 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.787 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:51.787 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:51.787 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:51.787 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:51.788 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.788 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.788 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:51.788 "name": "raid_bdev1", 00:20:51.788 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:51.788 "strip_size_kb": 0, 00:20:51.788 "state": "online", 00:20:51.788 "raid_level": "raid1", 00:20:51.788 "superblock": true, 00:20:51.788 "num_base_bdevs": 4, 00:20:51.788 "num_base_bdevs_discovered": 2, 00:20:51.788 "num_base_bdevs_operational": 2, 00:20:51.788 "base_bdevs_list": [ 00:20:51.788 { 00:20:51.788 "name": null, 00:20:51.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.788 "is_configured": false, 00:20:51.788 "data_offset": 2048, 00:20:51.788 "data_size": 63488 00:20:51.788 }, 00:20:51.788 { 00:20:51.788 "name": null, 00:20:51.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.788 "is_configured": false, 00:20:51.788 "data_offset": 2048, 00:20:51.788 "data_size": 63488 00:20:51.788 }, 00:20:51.788 { 00:20:51.788 "name": "BaseBdev3", 00:20:51.788 "uuid": 
"0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:51.788 "is_configured": true, 00:20:51.788 "data_offset": 2048, 00:20:51.788 "data_size": 63488 00:20:51.788 }, 00:20:51.788 { 00:20:51.788 "name": "BaseBdev4", 00:20:51.788 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:51.788 "is_configured": true, 00:20:51.788 "data_offset": 2048, 00:20:51.788 "data_size": 63488 00:20:51.788 } 00:20:51.788 ] 00:20:51.788 }' 00:20:51.788 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:51.788 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:51.788 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:51.788 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:51.788 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:52.047 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:52.306 [2024-07-24 18:24:00.679692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:52.306 [2024-07-24 18:24:00.679730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.306 [2024-07-24 18:24:00.679746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc37d0 00:20:52.306 [2024-07-24 18:24:00.679754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.306 [2024-07-24 18:24:00.680027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.306 [2024-07-24 18:24:00.680039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:20:52.306 [2024-07-24 18:24:00.680089] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:52.306 [2024-07-24 18:24:00.680098] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:52.306 [2024-07-24 18:24:00.680105] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:52.306 BaseBdev1 00:20:52.307 18:24:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:20:53.244 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.245 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:53.503 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.503 "name": "raid_bdev1", 00:20:53.503 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:53.503 "strip_size_kb": 0, 00:20:53.503 "state": "online", 00:20:53.503 "raid_level": "raid1", 00:20:53.503 "superblock": true, 00:20:53.503 "num_base_bdevs": 4, 00:20:53.503 "num_base_bdevs_discovered": 2, 00:20:53.503 "num_base_bdevs_operational": 2, 00:20:53.503 "base_bdevs_list": [ 00:20:53.503 { 00:20:53.503 "name": null, 00:20:53.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.503 "is_configured": false, 00:20:53.504 "data_offset": 2048, 00:20:53.504 "data_size": 63488 00:20:53.504 }, 00:20:53.504 { 00:20:53.504 "name": null, 00:20:53.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.504 "is_configured": false, 00:20:53.504 "data_offset": 2048, 00:20:53.504 "data_size": 63488 00:20:53.504 }, 00:20:53.504 { 00:20:53.504 "name": "BaseBdev3", 00:20:53.504 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:53.504 "is_configured": true, 00:20:53.504 "data_offset": 2048, 00:20:53.504 "data_size": 63488 00:20:53.504 }, 00:20:53.504 { 00:20:53.504 "name": "BaseBdev4", 00:20:53.504 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:53.504 "is_configured": true, 00:20:53.504 "data_offset": 2048, 00:20:53.504 "data_size": 63488 00:20:53.504 } 00:20:53.504 ] 00:20:53.504 }' 00:20:53.504 18:24:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.504 18:24:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 
00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:54.072 "name": "raid_bdev1", 00:20:54.072 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:54.072 "strip_size_kb": 0, 00:20:54.072 "state": "online", 00:20:54.072 "raid_level": "raid1", 00:20:54.072 "superblock": true, 00:20:54.072 "num_base_bdevs": 4, 00:20:54.072 "num_base_bdevs_discovered": 2, 00:20:54.072 "num_base_bdevs_operational": 2, 00:20:54.072 "base_bdevs_list": [ 00:20:54.072 { 00:20:54.072 "name": null, 00:20:54.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.072 "is_configured": false, 00:20:54.072 "data_offset": 2048, 00:20:54.072 "data_size": 63488 00:20:54.072 }, 00:20:54.072 { 00:20:54.072 "name": null, 00:20:54.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.072 "is_configured": false, 00:20:54.072 "data_offset": 2048, 00:20:54.072 "data_size": 63488 00:20:54.072 }, 00:20:54.072 { 00:20:54.072 "name": "BaseBdev3", 00:20:54.072 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:54.072 "is_configured": true, 00:20:54.072 "data_offset": 2048, 00:20:54.072 "data_size": 63488 00:20:54.072 }, 00:20:54.072 { 00:20:54.072 "name": "BaseBdev4", 00:20:54.072 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:54.072 "is_configured": true, 00:20:54.072 "data_offset": 2048, 00:20:54.072 "data_size": 63488 00:20:54.072 } 00:20:54.072 ] 00:20:54.072 }' 00:20:54.072 18:24:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:54.072 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:54.331 [2024-07-24 18:24:02.785163] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:54.331 [2024-07-24 18:24:02.785271] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:54.331 [2024-07-24 18:24:02.785283] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:54.331 request: 00:20:54.331 { 00:20:54.331 "base_bdev": "BaseBdev1", 00:20:54.331 "raid_bdev": "raid_bdev1", 00:20:54.331 "method": "bdev_raid_add_base_bdev", 00:20:54.331 "req_id": 1 00:20:54.331 } 00:20:54.331 Got JSON-RPC error response 00:20:54.331 response: 00:20:54.331 { 00:20:54.331 "code": -22, 00:20:54.331 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:54.331 } 00:20:54.331 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:20:54.331 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.331 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.331 18:24:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.331 18:24:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.277 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.536 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.536 "name": "raid_bdev1", 00:20:55.536 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:55.536 "strip_size_kb": 0, 00:20:55.536 "state": "online", 00:20:55.536 "raid_level": "raid1", 00:20:55.536 "superblock": true, 00:20:55.536 "num_base_bdevs": 4, 00:20:55.536 "num_base_bdevs_discovered": 2, 00:20:55.536 "num_base_bdevs_operational": 2, 00:20:55.536 "base_bdevs_list": [ 00:20:55.536 { 00:20:55.536 "name": null, 00:20:55.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.536 "is_configured": false, 00:20:55.536 "data_offset": 2048, 00:20:55.536 "data_size": 63488 00:20:55.536 }, 00:20:55.536 { 00:20:55.536 "name": null, 00:20:55.536 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:55.536 "is_configured": false, 00:20:55.536 "data_offset": 2048, 00:20:55.536 "data_size": 63488 00:20:55.536 }, 00:20:55.536 { 00:20:55.536 "name": "BaseBdev3", 00:20:55.536 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:55.536 "is_configured": true, 00:20:55.536 "data_offset": 2048, 00:20:55.536 "data_size": 63488 00:20:55.536 }, 00:20:55.536 { 00:20:55.536 "name": "BaseBdev4", 00:20:55.536 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:55.536 "is_configured": true, 00:20:55.536 "data_offset": 2048, 00:20:55.536 "data_size": 63488 00:20:55.536 } 00:20:55.536 ] 00:20:55.536 }' 00:20:55.536 18:24:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.536 18:24:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.846 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.846 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:55.846 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:55.846 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:55.846 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:56.122 "name": "raid_bdev1", 00:20:56.122 "uuid": "a2c978a7-3881-4a0d-a96f-a5411afce28e", 00:20:56.122 "strip_size_kb": 0, 00:20:56.122 "state": "online", 00:20:56.122 "raid_level": "raid1", 00:20:56.122 
"superblock": true, 00:20:56.122 "num_base_bdevs": 4, 00:20:56.122 "num_base_bdevs_discovered": 2, 00:20:56.122 "num_base_bdevs_operational": 2, 00:20:56.122 "base_bdevs_list": [ 00:20:56.122 { 00:20:56.122 "name": null, 00:20:56.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.122 "is_configured": false, 00:20:56.122 "data_offset": 2048, 00:20:56.122 "data_size": 63488 00:20:56.122 }, 00:20:56.122 { 00:20:56.122 "name": null, 00:20:56.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.122 "is_configured": false, 00:20:56.122 "data_offset": 2048, 00:20:56.122 "data_size": 63488 00:20:56.122 }, 00:20:56.122 { 00:20:56.122 "name": "BaseBdev3", 00:20:56.122 "uuid": "0a46d1b7-c958-50e3-b209-6f2639411f4c", 00:20:56.122 "is_configured": true, 00:20:56.122 "data_offset": 2048, 00:20:56.122 "data_size": 63488 00:20:56.122 }, 00:20:56.122 { 00:20:56.122 "name": "BaseBdev4", 00:20:56.122 "uuid": "b3502caf-5fd4-57d2-9365-dedec12fbb6f", 00:20:56.122 "is_configured": true, 00:20:56.122 "data_offset": 2048, 00:20:56.122 "data_size": 63488 00:20:56.122 } 00:20:56.122 ] 00:20:56.122 }' 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 2277227 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 2277227 ']' 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 2277227 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:56.122 18:24:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.122 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2277227 00:20:56.380 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:56.380 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:56.380 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2277227' 00:20:56.380 killing process with pid 2277227 00:20:56.380 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 2277227 00:20:56.380 Received shutdown signal, test time was about 60.000000 seconds 00:20:56.380 00:20:56.380 Latency(us) 00:20:56.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.380 =================================================================================================================== 00:20:56.380 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.380 [2024-07-24 18:24:04.739820] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.380 [2024-07-24 18:24:04.739897] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.380 [2024-07-24 18:24:04.739938] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.380 [2024-07-24 18:24:04.739946] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dc0720 name raid_bdev1, state offline 00:20:56.380 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 2277227 00:20:56.380 [2024-07-24 18:24:04.778431] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:56.380 18:24:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:20:56.381 00:20:56.381 real 0m30.126s 
00:20:56.381 user 0m42.600s 00:20:56.381 sys 0m5.499s 00:20:56.381 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:56.381 18:24:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.381 ************************************ 00:20:56.381 END TEST raid_rebuild_test_sb 00:20:56.381 ************************************ 00:20:56.639 18:24:04 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:20:56.639 18:24:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:56.639 18:24:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:56.639 18:24:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:56.639 ************************************ 00:20:56.639 START TEST raid_rebuild_test_io 00:20:56.639 ************************************ 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:56.639 
18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:56.639 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:56.640 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:56.640 18:24:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:20:56.640 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2283013 00:20:56.640 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2283013 /var/tmp/spdk-raid.sock 00:20:56.640 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:56.640 18:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 2283013 ']' 00:20:56.640 18:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:56.640 18:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:56.640 18:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:56.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:56.640 18:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:56.640 18:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:56.640 [2024-07-24 18:24:05.099315] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:20:56.640 [2024-07-24 18:24:05.099356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283013 ] 00:20:56.640 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:56.640 Zero copy mechanism will not be used. 
00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:01.0 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:01.1 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:01.2 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:01.3 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:01.4 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:01.5 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:01.6 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:01.7 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:02.0 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:02.1 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:02.2 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:02.3 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:02.4 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:02.5 cannot be used 00:20:56.640 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:02.6 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b3:02.7 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:01.0 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:01.1 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:01.2 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:01.3 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:01.4 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:01.5 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:01.6 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:01.7 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:02.0 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:02.1 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:02.2 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:02.3 cannot be used 00:20:56.640 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:02.4 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:02.5 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:02.6 cannot be used 00:20:56.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:56.640 EAL: Requested device 0000:b5:02.7 cannot be used 00:20:56.640 [2024-07-24 18:24:05.190758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.899 [2024-07-24 18:24:05.264469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.899 [2024-07-24 18:24:05.313216] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.899 [2024-07-24 18:24:05.313243] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:57.466 18:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:57.466 18:24:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:20:57.466 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:57.466 18:24:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:57.466 BaseBdev1_malloc 00:20:57.466 18:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:57.725 [2024-07-24 18:24:06.213085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:57.725 [2024-07-24 18:24:06.213122] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:20:57.725 [2024-07-24 18:24:06.213138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2344370 00:20:57.725 [2024-07-24 18:24:06.213146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.725 [2024-07-24 18:24:06.214272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.725 [2024-07-24 18:24:06.214294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:57.725 BaseBdev1 00:20:57.725 18:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:57.725 18:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:57.984 BaseBdev2_malloc 00:20:57.984 18:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:57.984 [2024-07-24 18:24:06.553697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:57.984 [2024-07-24 18:24:06.553730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.984 [2024-07-24 18:24:06.553744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24e7e70 00:20:57.984 [2024-07-24 18:24:06.553752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.984 [2024-07-24 18:24:06.554833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.984 [2024-07-24 18:24:06.554854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:57.984 BaseBdev2 00:20:57.984 18:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:20:57.984 18:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:58.243 BaseBdev3_malloc 00:20:58.243 18:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:58.501 [2024-07-24 18:24:06.890209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:58.501 [2024-07-24 18:24:06.890243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.501 [2024-07-24 18:24:06.890255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24de160 00:20:58.501 [2024-07-24 18:24:06.890279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.501 [2024-07-24 18:24:06.891293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.501 [2024-07-24 18:24:06.891314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:58.501 BaseBdev3 00:20:58.501 18:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:58.501 18:24:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:58.501 BaseBdev4_malloc 00:20:58.501 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:58.760 [2024-07-24 18:24:07.218669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:58.760 [2024-07-24 
18:24:07.218706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.760 [2024-07-24 18:24:07.218720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24dea80 00:20:58.760 [2024-07-24 18:24:07.218728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.760 [2024-07-24 18:24:07.219814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.760 [2024-07-24 18:24:07.219835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:58.760 BaseBdev4 00:20:58.760 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:59.018 spare_malloc 00:20:59.018 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:59.018 spare_delay 00:20:59.018 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:59.277 [2024-07-24 18:24:07.711420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:59.277 [2024-07-24 18:24:07.711454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.277 [2024-07-24 18:24:07.711468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x233db70 00:20:59.277 [2024-07-24 18:24:07.711481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.277 [2024-07-24 18:24:07.712543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.277 [2024-07-24 18:24:07.712564] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:59.277 spare 00:20:59.277 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:59.536 [2024-07-24 18:24:07.883890] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:59.536 [2024-07-24 18:24:07.884835] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:59.536 [2024-07-24 18:24:07.884873] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:59.536 [2024-07-24 18:24:07.884903] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:59.536 [2024-07-24 18:24:07.884957] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2340370 00:20:59.536 [2024-07-24 18:24:07.884964] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:59.536 [2024-07-24 18:24:07.885113] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2343140 00:20:59.536 [2024-07-24 18:24:07.885222] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2340370 00:20:59.536 [2024-07-24 18:24:07.885229] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2340370 00:20:59.536 [2024-07-24 18:24:07.885308] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:59.536 18:24:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.536 18:24:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.536 18:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:59.536 "name": "raid_bdev1", 00:20:59.536 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:20:59.536 "strip_size_kb": 0, 00:20:59.536 "state": "online", 00:20:59.536 "raid_level": "raid1", 00:20:59.536 "superblock": false, 00:20:59.536 "num_base_bdevs": 4, 00:20:59.536 "num_base_bdevs_discovered": 4, 00:20:59.536 "num_base_bdevs_operational": 4, 00:20:59.536 "base_bdevs_list": [ 00:20:59.536 { 00:20:59.536 "name": "BaseBdev1", 00:20:59.536 "uuid": "7c3639fe-c106-5ca3-88cd-41f49e0f819e", 00:20:59.536 "is_configured": true, 00:20:59.536 "data_offset": 0, 00:20:59.536 "data_size": 65536 00:20:59.536 }, 00:20:59.536 { 00:20:59.536 "name": "BaseBdev2", 00:20:59.536 "uuid": "942ca756-648d-56bd-9a5a-2881d339f97f", 00:20:59.536 "is_configured": true, 00:20:59.536 "data_offset": 0, 00:20:59.536 "data_size": 65536 00:20:59.536 }, 00:20:59.536 { 
00:20:59.536 "name": "BaseBdev3", 00:20:59.536 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:20:59.536 "is_configured": true, 00:20:59.536 "data_offset": 0, 00:20:59.536 "data_size": 65536 00:20:59.536 }, 00:20:59.536 { 00:20:59.536 "name": "BaseBdev4", 00:20:59.536 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:20:59.536 "is_configured": true, 00:20:59.536 "data_offset": 0, 00:20:59.536 "data_size": 65536 00:20:59.536 } 00:20:59.536 ] 00:20:59.536 }' 00:20:59.536 18:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:59.536 18:24:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.104 18:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:00.104 18:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:00.363 [2024-07-24 18:24:08.706176] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:00.363 18:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:21:00.363 18:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.363 18:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:00.363 18:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:21:00.363 18:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:21:00.363 18:24:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:00.363 18:24:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:00.622 [2024-07-24 18:24:08.988553] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23436a0 00:21:00.622 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:00.622 Zero copy mechanism will not be used. 00:21:00.622 Running I/O for 60 seconds... 00:21:00.622 [2024-07-24 18:24:09.069886] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:00.622 [2024-07-24 18:24:09.074945] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x23436a0 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.622 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.880 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.880 "name": "raid_bdev1", 00:21:00.880 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:00.880 "strip_size_kb": 0, 00:21:00.880 "state": "online", 00:21:00.880 "raid_level": "raid1", 00:21:00.880 "superblock": false, 00:21:00.880 "num_base_bdevs": 4, 00:21:00.880 "num_base_bdevs_discovered": 3, 00:21:00.880 "num_base_bdevs_operational": 3, 00:21:00.880 "base_bdevs_list": [ 00:21:00.880 { 00:21:00.880 "name": null, 00:21:00.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.880 "is_configured": false, 00:21:00.880 "data_offset": 0, 00:21:00.880 "data_size": 65536 00:21:00.880 }, 00:21:00.880 { 00:21:00.880 "name": "BaseBdev2", 00:21:00.880 "uuid": "942ca756-648d-56bd-9a5a-2881d339f97f", 00:21:00.880 "is_configured": true, 00:21:00.880 "data_offset": 0, 00:21:00.880 "data_size": 65536 00:21:00.880 }, 00:21:00.881 { 00:21:00.881 "name": "BaseBdev3", 00:21:00.881 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:00.881 "is_configured": true, 00:21:00.881 "data_offset": 0, 00:21:00.881 "data_size": 65536 00:21:00.881 }, 00:21:00.881 { 00:21:00.881 "name": "BaseBdev4", 00:21:00.881 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:00.881 "is_configured": true, 00:21:00.881 "data_offset": 0, 00:21:00.881 "data_size": 65536 00:21:00.881 } 00:21:00.881 ] 00:21:00.881 }' 00:21:00.881 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.881 18:24:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:01.449 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:01.449 [2024-07-24 18:24:09.954110] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.449 18:24:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:21:01.449 [2024-07-24 18:24:10.003708] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23db6d0 00:21:01.449 [2024-07-24 18:24:10.005587] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:01.708 [2024-07-24 18:24:10.120685] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:01.708 [2024-07-24 18:24:10.120990] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:01.967 [2024-07-24 18:24:10.336567] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:01.967 [2024-07-24 18:24:10.337007] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:02.225 [2024-07-24 18:24:10.653743] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:02.225 [2024-07-24 18:24:10.768270] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:02.484 [2024-07-24 18:24:11.002047] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:02.484 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.484 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:02.484 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:02.484 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:21:02.484 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:02.484 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.484 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.743 [2024-07-24 18:24:11.110961] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:02.743 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:02.743 "name": "raid_bdev1", 00:21:02.743 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:02.743 "strip_size_kb": 0, 00:21:02.743 "state": "online", 00:21:02.743 "raid_level": "raid1", 00:21:02.743 "superblock": false, 00:21:02.743 "num_base_bdevs": 4, 00:21:02.743 "num_base_bdevs_discovered": 4, 00:21:02.743 "num_base_bdevs_operational": 4, 00:21:02.743 "process": { 00:21:02.743 "type": "rebuild", 00:21:02.743 "target": "spare", 00:21:02.743 "progress": { 00:21:02.743 "blocks": 16384, 00:21:02.743 "percent": 25 00:21:02.743 } 00:21:02.743 }, 00:21:02.743 "base_bdevs_list": [ 00:21:02.743 { 00:21:02.743 "name": "spare", 00:21:02.743 "uuid": "fb6b8715-1630-5192-a7f6-c55e1c824787", 00:21:02.743 "is_configured": true, 00:21:02.743 "data_offset": 0, 00:21:02.743 "data_size": 65536 00:21:02.744 }, 00:21:02.744 { 00:21:02.744 "name": "BaseBdev2", 00:21:02.744 "uuid": "942ca756-648d-56bd-9a5a-2881d339f97f", 00:21:02.744 "is_configured": true, 00:21:02.744 "data_offset": 0, 00:21:02.744 "data_size": 65536 00:21:02.744 }, 00:21:02.744 { 00:21:02.744 "name": "BaseBdev3", 00:21:02.744 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:02.744 "is_configured": true, 00:21:02.744 "data_offset": 0, 00:21:02.744 "data_size": 65536 00:21:02.744 }, 00:21:02.744 { 00:21:02.744 
"name": "BaseBdev4", 00:21:02.744 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:02.744 "is_configured": true, 00:21:02.744 "data_offset": 0, 00:21:02.744 "data_size": 65536 00:21:02.744 } 00:21:02.744 ] 00:21:02.744 }' 00:21:02.744 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:02.744 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.744 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:02.744 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.744 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:03.003 [2024-07-24 18:24:11.418154] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:03.003 [2024-07-24 18:24:11.503210] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:03.003 [2024-07-24 18:24:11.512354] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.003 [2024-07-24 18:24:11.512375] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:03.003 [2024-07-24 18:24:11.512382] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:03.003 [2024-07-24 18:24:11.538774] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x23436a0 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.003 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.262 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.262 "name": "raid_bdev1", 00:21:03.262 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:03.262 "strip_size_kb": 0, 00:21:03.262 "state": "online", 00:21:03.262 "raid_level": "raid1", 00:21:03.262 "superblock": false, 00:21:03.262 "num_base_bdevs": 4, 00:21:03.262 "num_base_bdevs_discovered": 3, 00:21:03.262 "num_base_bdevs_operational": 3, 00:21:03.262 "base_bdevs_list": [ 00:21:03.262 { 00:21:03.262 "name": null, 00:21:03.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.262 "is_configured": false, 00:21:03.262 "data_offset": 0, 00:21:03.262 "data_size": 65536 00:21:03.262 }, 00:21:03.262 { 00:21:03.262 "name": "BaseBdev2", 00:21:03.262 "uuid": "942ca756-648d-56bd-9a5a-2881d339f97f", 00:21:03.262 "is_configured": true, 00:21:03.262 "data_offset": 0, 00:21:03.262 
"data_size": 65536 00:21:03.262 }, 00:21:03.262 { 00:21:03.262 "name": "BaseBdev3", 00:21:03.262 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:03.262 "is_configured": true, 00:21:03.262 "data_offset": 0, 00:21:03.262 "data_size": 65536 00:21:03.262 }, 00:21:03.262 { 00:21:03.262 "name": "BaseBdev4", 00:21:03.262 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:03.262 "is_configured": true, 00:21:03.262 "data_offset": 0, 00:21:03.262 "data_size": 65536 00:21:03.262 } 00:21:03.262 ] 00:21:03.262 }' 00:21:03.262 18:24:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.262 18:24:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.831 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.831 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:03.831 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:03.831 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:03.831 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:03.831 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.831 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.090 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:04.090 "name": "raid_bdev1", 00:21:04.090 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:04.090 "strip_size_kb": 0, 00:21:04.090 "state": "online", 00:21:04.090 "raid_level": "raid1", 00:21:04.090 "superblock": false, 00:21:04.090 "num_base_bdevs": 4, 00:21:04.090 "num_base_bdevs_discovered": 
3, 00:21:04.090 "num_base_bdevs_operational": 3, 00:21:04.090 "base_bdevs_list": [ 00:21:04.090 { 00:21:04.090 "name": null, 00:21:04.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.090 "is_configured": false, 00:21:04.090 "data_offset": 0, 00:21:04.090 "data_size": 65536 00:21:04.090 }, 00:21:04.090 { 00:21:04.090 "name": "BaseBdev2", 00:21:04.090 "uuid": "942ca756-648d-56bd-9a5a-2881d339f97f", 00:21:04.090 "is_configured": true, 00:21:04.090 "data_offset": 0, 00:21:04.090 "data_size": 65536 00:21:04.090 }, 00:21:04.090 { 00:21:04.090 "name": "BaseBdev3", 00:21:04.090 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:04.090 "is_configured": true, 00:21:04.090 "data_offset": 0, 00:21:04.090 "data_size": 65536 00:21:04.090 }, 00:21:04.090 { 00:21:04.090 "name": "BaseBdev4", 00:21:04.090 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:04.090 "is_configured": true, 00:21:04.090 "data_offset": 0, 00:21:04.090 "data_size": 65536 00:21:04.090 } 00:21:04.090 ] 00:21:04.090 }' 00:21:04.090 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:04.090 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:04.090 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:04.090 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:04.090 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:04.350 [2024-07-24 18:24:12.699114] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.350 18:24:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:04.350 [2024-07-24 18:24:12.743665] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x233c250 
00:21:04.350 [2024-07-24 18:24:12.744757] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:04.350 [2024-07-24 18:24:12.877435] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:04.609 [2024-07-24 18:24:13.094242] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:04.609 [2024-07-24 18:24:13.094803] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:04.868 [2024-07-24 18:24:13.421629] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:04.868 [2024-07-24 18:24:13.421872] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:05.126 [2024-07-24 18:24:13.536250] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:21:05.385 [2024-07-24 18:24:13.860329] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:05.385 "name": "raid_bdev1", 00:21:05.385 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:05.385 "strip_size_kb": 0, 00:21:05.385 "state": "online", 00:21:05.385 "raid_level": "raid1", 00:21:05.385 "superblock": false, 00:21:05.385 "num_base_bdevs": 4, 00:21:05.385 "num_base_bdevs_discovered": 4, 00:21:05.385 "num_base_bdevs_operational": 4, 00:21:05.385 "process": { 00:21:05.385 "type": "rebuild", 00:21:05.385 "target": "spare", 00:21:05.385 "progress": { 00:21:05.385 "blocks": 14336, 00:21:05.385 "percent": 21 00:21:05.385 } 00:21:05.385 }, 00:21:05.385 "base_bdevs_list": [ 00:21:05.385 { 00:21:05.385 "name": "spare", 00:21:05.385 "uuid": "fb6b8715-1630-5192-a7f6-c55e1c824787", 00:21:05.385 "is_configured": true, 00:21:05.385 "data_offset": 0, 00:21:05.385 "data_size": 65536 00:21:05.385 }, 00:21:05.385 { 00:21:05.385 "name": "BaseBdev2", 00:21:05.385 "uuid": "942ca756-648d-56bd-9a5a-2881d339f97f", 00:21:05.385 "is_configured": true, 00:21:05.385 "data_offset": 0, 00:21:05.385 "data_size": 65536 00:21:05.385 }, 00:21:05.385 { 00:21:05.385 "name": "BaseBdev3", 00:21:05.385 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:05.385 "is_configured": true, 00:21:05.385 "data_offset": 0, 00:21:05.385 "data_size": 65536 00:21:05.385 }, 00:21:05.385 { 00:21:05.385 "name": "BaseBdev4", 00:21:05.385 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:05.385 "is_configured": true, 00:21:05.385 "data_offset": 0, 00:21:05.385 "data_size": 65536 00:21:05.385 } 00:21:05.385 ] 00:21:05.385 }' 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:21:05.385 18:24:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:05.644 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:05.644 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:21:05.644 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:21:05.644 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:05.644 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:21:05.644 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:05.644 [2024-07-24 18:24:14.077029] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:05.644 [2024-07-24 18:24:14.077268] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:05.644 [2024-07-24 18:24:14.172761] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:05.902 [2024-07-24 18:24:14.292328] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x23436a0 00:21:05.902 [2024-07-24 18:24:14.292347] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x233c250 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.902 [2024-07-24 18:24:14.414436] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:05.902 "name": "raid_bdev1", 00:21:05.902 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:05.902 "strip_size_kb": 0, 00:21:05.902 "state": "online", 00:21:05.902 "raid_level": "raid1", 00:21:05.902 "superblock": false, 00:21:05.902 "num_base_bdevs": 4, 00:21:05.902 "num_base_bdevs_discovered": 3, 00:21:05.902 "num_base_bdevs_operational": 3, 00:21:05.902 "process": { 00:21:05.902 "type": "rebuild", 00:21:05.902 "target": "spare", 00:21:05.902 "progress": { 00:21:05.902 "blocks": 20480, 00:21:05.902 "percent": 31 00:21:05.902 } 00:21:05.902 }, 00:21:05.902 "base_bdevs_list": [ 00:21:05.902 { 00:21:05.902 "name": "spare", 00:21:05.902 "uuid": "fb6b8715-1630-5192-a7f6-c55e1c824787", 00:21:05.902 "is_configured": true, 00:21:05.902 "data_offset": 0, 00:21:05.902 "data_size": 65536 00:21:05.902 }, 00:21:05.902 { 00:21:05.902 "name": null, 00:21:05.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.902 "is_configured": false, 00:21:05.902 "data_offset": 0, 00:21:05.902 "data_size": 65536 00:21:05.902 }, 
00:21:05.902 { 00:21:05.902 "name": "BaseBdev3", 00:21:05.902 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:05.902 "is_configured": true, 00:21:05.902 "data_offset": 0, 00:21:05.902 "data_size": 65536 00:21:05.902 }, 00:21:05.902 { 00:21:05.902 "name": "BaseBdev4", 00:21:05.902 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:05.902 "is_configured": true, 00:21:05.902 "data_offset": 0, 00:21:05.902 "data_size": 65536 00:21:05.902 } 00:21:05.902 ] 00:21:05.902 }' 00:21:05.902 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=718 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:21:06.160 [2024-07-24 18:24:14.621930] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:06.160 [2024-07-24 18:24:14.622303] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:06.160 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:06.160 "name": "raid_bdev1", 00:21:06.160 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:06.160 "strip_size_kb": 0, 00:21:06.160 "state": "online", 00:21:06.160 "raid_level": "raid1", 00:21:06.160 "superblock": false, 00:21:06.160 "num_base_bdevs": 4, 00:21:06.160 "num_base_bdevs_discovered": 3, 00:21:06.160 "num_base_bdevs_operational": 3, 00:21:06.160 "process": { 00:21:06.160 "type": "rebuild", 00:21:06.160 "target": "spare", 00:21:06.160 "progress": { 00:21:06.160 "blocks": 22528, 00:21:06.160 "percent": 34 00:21:06.160 } 00:21:06.160 }, 00:21:06.160 "base_bdevs_list": [ 00:21:06.160 { 00:21:06.160 "name": "spare", 00:21:06.160 "uuid": "fb6b8715-1630-5192-a7f6-c55e1c824787", 00:21:06.160 "is_configured": true, 00:21:06.160 "data_offset": 0, 00:21:06.160 "data_size": 65536 00:21:06.160 }, 00:21:06.160 { 00:21:06.160 "name": null, 00:21:06.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.160 "is_configured": false, 00:21:06.160 "data_offset": 0, 00:21:06.160 "data_size": 65536 00:21:06.160 }, 00:21:06.160 { 00:21:06.160 "name": "BaseBdev3", 00:21:06.160 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:06.160 "is_configured": true, 00:21:06.160 "data_offset": 0, 00:21:06.160 "data_size": 65536 00:21:06.160 }, 00:21:06.160 { 00:21:06.160 "name": "BaseBdev4", 00:21:06.160 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:06.160 "is_configured": true, 00:21:06.160 "data_offset": 0, 00:21:06.160 "data_size": 65536 00:21:06.160 } 00:21:06.160 ] 00:21:06.160 }' 00:21:06.160 18:24:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:06.419 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.419 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:06.419 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.419 18:24:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:06.419 [2024-07-24 18:24:14.964053] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:06.678 [2024-07-24 18:24:15.088032] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:06.938 [2024-07-24 18:24:15.420979] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:07.505 18:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:07.506 18:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.506 18:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:07.506 18:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:07.506 18:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:07.506 18:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:07.506 18:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.506 18:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:07.506 18:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:07.506 "name": "raid_bdev1", 00:21:07.506 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:07.506 "strip_size_kb": 0, 00:21:07.506 "state": "online", 00:21:07.506 "raid_level": "raid1", 00:21:07.506 "superblock": false, 00:21:07.506 "num_base_bdevs": 4, 00:21:07.506 "num_base_bdevs_discovered": 3, 00:21:07.506 "num_base_bdevs_operational": 3, 00:21:07.506 "process": { 00:21:07.506 "type": "rebuild", 00:21:07.506 "target": "spare", 00:21:07.506 "progress": { 00:21:07.506 "blocks": 38912, 00:21:07.506 "percent": 59 00:21:07.506 } 00:21:07.506 }, 00:21:07.506 "base_bdevs_list": [ 00:21:07.506 { 00:21:07.506 "name": "spare", 00:21:07.506 "uuid": "fb6b8715-1630-5192-a7f6-c55e1c824787", 00:21:07.506 "is_configured": true, 00:21:07.506 "data_offset": 0, 00:21:07.506 "data_size": 65536 00:21:07.506 }, 00:21:07.506 { 00:21:07.506 "name": null, 00:21:07.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.506 "is_configured": false, 00:21:07.506 "data_offset": 0, 00:21:07.506 "data_size": 65536 00:21:07.506 }, 00:21:07.506 { 00:21:07.506 "name": "BaseBdev3", 00:21:07.506 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:07.506 "is_configured": true, 00:21:07.506 "data_offset": 0, 00:21:07.506 "data_size": 65536 00:21:07.506 }, 00:21:07.506 { 00:21:07.506 "name": "BaseBdev4", 00:21:07.506 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:07.506 "is_configured": true, 00:21:07.506 "data_offset": 0, 00:21:07.506 "data_size": 65536 00:21:07.506 } 00:21:07.506 ] 00:21:07.506 }' 00:21:07.506 18:24:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:07.506 18:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.506 18:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:07.506 18:24:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.506 18:24:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:08.444 [2024-07-24 18:24:16.676291] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:08.444 [2024-07-24 18:24:16.989513] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:08.444 [2024-07-24 18:24:16.990026] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:08.702 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:08.703 "name": "raid_bdev1", 00:21:08.703 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:08.703 "strip_size_kb": 0, 00:21:08.703 "state": "online", 00:21:08.703 "raid_level": "raid1", 00:21:08.703 "superblock": false, 
00:21:08.703 "num_base_bdevs": 4, 00:21:08.703 "num_base_bdevs_discovered": 3, 00:21:08.703 "num_base_bdevs_operational": 3, 00:21:08.703 "process": { 00:21:08.703 "type": "rebuild", 00:21:08.703 "target": "spare", 00:21:08.703 "progress": { 00:21:08.703 "blocks": 57344, 00:21:08.703 "percent": 87 00:21:08.703 } 00:21:08.703 }, 00:21:08.703 "base_bdevs_list": [ 00:21:08.703 { 00:21:08.703 "name": "spare", 00:21:08.703 "uuid": "fb6b8715-1630-5192-a7f6-c55e1c824787", 00:21:08.703 "is_configured": true, 00:21:08.703 "data_offset": 0, 00:21:08.703 "data_size": 65536 00:21:08.703 }, 00:21:08.703 { 00:21:08.703 "name": null, 00:21:08.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.703 "is_configured": false, 00:21:08.703 "data_offset": 0, 00:21:08.703 "data_size": 65536 00:21:08.703 }, 00:21:08.703 { 00:21:08.703 "name": "BaseBdev3", 00:21:08.703 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:08.703 "is_configured": true, 00:21:08.703 "data_offset": 0, 00:21:08.703 "data_size": 65536 00:21:08.703 }, 00:21:08.703 { 00:21:08.703 "name": "BaseBdev4", 00:21:08.703 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:08.703 "is_configured": true, 00:21:08.703 "data_offset": 0, 00:21:08.703 "data_size": 65536 00:21:08.703 } 00:21:08.703 ] 00:21:08.703 }' 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:08.703 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:08.962 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:08.962 18:24:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:08.962 [2024-07-24 18:24:17.539195] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:09.221 [2024-07-24 
18:24:17.639512] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:09.221 [2024-07-24 18:24:17.646274] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.790 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:09.790 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.790 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:09.790 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:09.790 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:09.790 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:09.790 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.790 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:10.112 "name": "raid_bdev1", 00:21:10.112 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:10.112 "strip_size_kb": 0, 00:21:10.112 "state": "online", 00:21:10.112 "raid_level": "raid1", 00:21:10.112 "superblock": false, 00:21:10.112 "num_base_bdevs": 4, 00:21:10.112 "num_base_bdevs_discovered": 3, 00:21:10.112 "num_base_bdevs_operational": 3, 00:21:10.112 "base_bdevs_list": [ 00:21:10.112 { 00:21:10.112 "name": "spare", 00:21:10.112 "uuid": "fb6b8715-1630-5192-a7f6-c55e1c824787", 00:21:10.112 "is_configured": true, 00:21:10.112 "data_offset": 0, 00:21:10.112 "data_size": 65536 00:21:10.112 }, 00:21:10.112 { 00:21:10.112 "name": null, 00:21:10.112 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:10.112 "is_configured": false, 00:21:10.112 "data_offset": 0, 00:21:10.112 "data_size": 65536 00:21:10.112 }, 00:21:10.112 { 00:21:10.112 "name": "BaseBdev3", 00:21:10.112 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:10.112 "is_configured": true, 00:21:10.112 "data_offset": 0, 00:21:10.112 "data_size": 65536 00:21:10.112 }, 00:21:10.112 { 00:21:10.112 "name": "BaseBdev4", 00:21:10.112 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:10.112 "is_configured": true, 00:21:10.112 "data_offset": 0, 00:21:10.112 "data_size": 65536 00:21:10.112 } 00:21:10.112 ] 00:21:10.112 }' 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.112 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:10.371 "name": "raid_bdev1", 00:21:10.371 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:10.371 "strip_size_kb": 0, 00:21:10.371 "state": "online", 00:21:10.371 "raid_level": "raid1", 00:21:10.371 "superblock": false, 00:21:10.371 "num_base_bdevs": 4, 00:21:10.371 "num_base_bdevs_discovered": 3, 00:21:10.371 "num_base_bdevs_operational": 3, 00:21:10.371 "base_bdevs_list": [ 00:21:10.371 { 00:21:10.371 "name": "spare", 00:21:10.371 "uuid": "fb6b8715-1630-5192-a7f6-c55e1c824787", 00:21:10.371 "is_configured": true, 00:21:10.371 "data_offset": 0, 00:21:10.371 "data_size": 65536 00:21:10.371 }, 00:21:10.371 { 00:21:10.371 "name": null, 00:21:10.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.371 "is_configured": false, 00:21:10.371 "data_offset": 0, 00:21:10.371 "data_size": 65536 00:21:10.371 }, 00:21:10.371 { 00:21:10.371 "name": "BaseBdev3", 00:21:10.371 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:10.371 "is_configured": true, 00:21:10.371 "data_offset": 0, 00:21:10.371 "data_size": 65536 00:21:10.371 }, 00:21:10.371 { 00:21:10.371 "name": "BaseBdev4", 00:21:10.371 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:10.371 "is_configured": true, 00:21:10.371 "data_offset": 0, 00:21:10.371 "data_size": 65536 00:21:10.371 } 00:21:10.371 ] 00:21:10.371 }' 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:10.371 18:24:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.371 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.631 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.631 "name": "raid_bdev1", 00:21:10.631 "uuid": "9dee9744-42d7-482e-8530-a655bdace817", 00:21:10.631 "strip_size_kb": 0, 00:21:10.631 "state": "online", 00:21:10.631 "raid_level": "raid1", 00:21:10.631 "superblock": false, 00:21:10.631 "num_base_bdevs": 4, 00:21:10.631 "num_base_bdevs_discovered": 3, 00:21:10.631 "num_base_bdevs_operational": 3, 00:21:10.631 "base_bdevs_list": [ 00:21:10.631 { 00:21:10.631 "name": "spare", 00:21:10.631 "uuid": 
"fb6b8715-1630-5192-a7f6-c55e1c824787", 00:21:10.631 "is_configured": true, 00:21:10.631 "data_offset": 0, 00:21:10.631 "data_size": 65536 00:21:10.631 }, 00:21:10.631 { 00:21:10.631 "name": null, 00:21:10.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.631 "is_configured": false, 00:21:10.631 "data_offset": 0, 00:21:10.631 "data_size": 65536 00:21:10.631 }, 00:21:10.631 { 00:21:10.631 "name": "BaseBdev3", 00:21:10.631 "uuid": "313aba25-5813-502b-9776-2f037b313922", 00:21:10.631 "is_configured": true, 00:21:10.631 "data_offset": 0, 00:21:10.631 "data_size": 65536 00:21:10.631 }, 00:21:10.631 { 00:21:10.631 "name": "BaseBdev4", 00:21:10.631 "uuid": "dbecc72d-ab93-5d8c-8477-3f2d11a8e9d3", 00:21:10.631 "is_configured": true, 00:21:10.631 "data_offset": 0, 00:21:10.631 "data_size": 65536 00:21:10.631 } 00:21:10.631 ] 00:21:10.631 }' 00:21:10.631 18:24:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.631 18:24:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:10.891 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:11.150 [2024-07-24 18:24:19.634671] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:11.150 [2024-07-24 18:24:19.634695] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:11.150 00:21:11.150 Latency(us) 00:21:11.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.150 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:11.150 raid_bdev1 : 10.67 107.21 321.63 0.00 0.00 13092.06 245.76 112407.35 00:21:11.150 =================================================================================================================== 00:21:11.150 Total : 107.21 321.63 0.00 0.00 13092.06 
245.76 112407.35 00:21:11.150 [2024-07-24 18:24:19.689427] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.150 [2024-07-24 18:24:19.689447] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.150 [2024-07-24 18:24:19.689507] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.150 [2024-07-24 18:24:19.689515] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2340370 name raid_bdev1, state offline 00:21:11.150 0 00:21:11.150 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.150 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:21:11.409 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:21:11.409 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:21:11.409 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:11.409 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:11.409 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:11.409 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:11.409 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:11.410 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:11.410 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:11.410 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:11.410 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 
0 )) 00:21:11.410 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:11.410 18:24:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:11.670 /dev/nbd0 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:11.670 1+0 records in 00:21:11.670 1+0 records out 00:21:11.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255058 s, 16.1 MB/s 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:21:11.670 
18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:11.670 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:11.670 /dev/nbd1 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:11.929 1+0 records in 00:21:11.929 1+0 records out 00:21:11.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221416 s, 18.5 MB/s 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:11.929 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:11.930 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:12.189 18:24:20 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:12.189 /dev/nbd1 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 
00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:12.189 1+0 records in 00:21:12.189 1+0 records out 00:21:12.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276774 s, 14.8 MB/s 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.189 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:21:12.449 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:12.449 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:12.449 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:12.449 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:12.449 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:12.449 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:12.449 18:24:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:12.449 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:12.449 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:12.449 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:12.449 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:12.449 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:12.449 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:12.449 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:12.450 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:12.450 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:12.450 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:12.450 18:24:21 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:12.450 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:12.450 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:21:12.450 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:12.450 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 2283013 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 2283013 ']' 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 2283013 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.709 18:24:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2283013 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2283013' 00:21:12.709 killing process with pid 2283013 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 2283013 00:21:12.709 Received shutdown signal, test time was about 12.253702 seconds 00:21:12.709 00:21:12.709 Latency(us) 00:21:12.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.709 =================================================================================================================== 00:21:12.709 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.709 [2024-07-24 18:24:21.273832] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:12.709 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 2283013 00:21:12.968 [2024-07-24 18:24:21.306136] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:21:12.968 00:21:12.968 real 0m16.443s 00:21:12.968 user 0m24.243s 00:21:12.968 sys 0m2.765s 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:12.968 ************************************ 00:21:12.968 END TEST raid_rebuild_test_io 00:21:12.968 ************************************ 00:21:12.968 18:24:21 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:21:12.968 
18:24:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:12.968 18:24:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.968 18:24:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:12.968 ************************************ 00:21:12.968 START TEST raid_rebuild_test_sb_io 00:21:12.968 ************************************ 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:12.968 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 
00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=2286373 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 2286373 /var/tmp/spdk-raid.sock 00:21:12.969 18:24:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 2286373 ']' 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:12.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.969 18:24:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:13.228 [2024-07-24 18:24:21.596100] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:21:13.228 [2024-07-24 18:24:21.596141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286373 ] 00:21:13.228 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:13.228 Zero copy mechanism will not be used. 
00:21:13.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.228 EAL: Requested device 0000:b3:01.0 cannot be used 00:21:13.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.228 EAL: Requested device 0000:b3:01.1 cannot be used 00:21:13.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.228 EAL: Requested device 0000:b3:01.2 cannot be used 00:21:13.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.228 EAL: Requested device 0000:b3:01.3 cannot be used 00:21:13.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.228 EAL: Requested device 0000:b3:01.4 cannot be used 00:21:13.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.228 EAL: Requested device 0000:b3:01.5 cannot be used 00:21:13.228 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.228 EAL: Requested device 0000:b3:01.6 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b3:01.7 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b3:02.0 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b3:02.1 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b3:02.2 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b3:02.3 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b3:02.4 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b3:02.5 cannot be used 00:21:13.229 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b3:02.6 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b3:02.7 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:01.0 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:01.1 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:01.2 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:01.3 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:01.4 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:01.5 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:01.6 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:01.7 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:02.0 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:02.1 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:02.2 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:02.3 cannot be used 00:21:13.229 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:02.4 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:02.5 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:02.6 cannot be used 00:21:13.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:13.229 EAL: Requested device 0000:b5:02.7 cannot be used 00:21:13.229 [2024-07-24 18:24:21.687543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.229 [2024-07-24 18:24:21.760389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.229 [2024-07-24 18:24:21.810849] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.229 [2024-07-24 18:24:21.810876] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.798 18:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.798 18:24:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:21:13.798 18:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:13.798 18:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:14.057 BaseBdev1_malloc 00:21:14.057 18:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:14.317 [2024-07-24 18:24:22.686724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:14.317 [2024-07-24 18:24:22.686760] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.317 [2024-07-24 18:24:22.686776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ace370 00:21:14.317 [2024-07-24 18:24:22.686785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.317 [2024-07-24 18:24:22.687902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.317 [2024-07-24 18:24:22.687924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:14.317 BaseBdev1 00:21:14.317 18:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:14.317 18:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:14.317 BaseBdev2_malloc 00:21:14.317 18:24:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:14.576 [2024-07-24 18:24:23.015370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:14.576 [2024-07-24 18:24:23.015402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.576 [2024-07-24 18:24:23.015414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c71e70 00:21:14.576 [2024-07-24 18:24:23.015438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.576 [2024-07-24 18:24:23.016435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.576 [2024-07-24 18:24:23.016457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:14.576 BaseBdev2 00:21:14.577 18:24:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:14.577 18:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:14.836 BaseBdev3_malloc 00:21:14.836 18:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:14.836 [2024-07-24 18:24:23.347784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:14.836 [2024-07-24 18:24:23.347818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.836 [2024-07-24 18:24:23.347830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c68160 00:21:14.837 [2024-07-24 18:24:23.347856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.837 [2024-07-24 18:24:23.348854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.837 [2024-07-24 18:24:23.348875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:14.837 BaseBdev3 00:21:14.837 18:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:14.837 18:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:15.094 BaseBdev4_malloc 00:21:15.094 18:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:15.095 [2024-07-24 18:24:23.676240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:21:15.095 [2024-07-24 18:24:23.676274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.095 [2024-07-24 18:24:23.676288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c68a80 00:21:15.095 [2024-07-24 18:24:23.676312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.095 [2024-07-24 18:24:23.677297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.095 [2024-07-24 18:24:23.677317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:15.095 BaseBdev4 00:21:15.095 18:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:15.352 spare_malloc 00:21:15.353 18:24:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:15.611 spare_delay 00:21:15.611 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:15.611 [2024-07-24 18:24:24.165034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:15.611 [2024-07-24 18:24:24.165068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.611 [2024-07-24 18:24:24.165083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ac7b70 00:21:15.611 [2024-07-24 18:24:24.165091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.611 [2024-07-24 18:24:24.166119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:21:15.611 [2024-07-24 18:24:24.166139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:15.611 spare 00:21:15.611 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:15.869 [2024-07-24 18:24:24.333501] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:15.869 [2024-07-24 18:24:24.334340] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:15.869 [2024-07-24 18:24:24.334381] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:15.869 [2024-07-24 18:24:24.334409] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:15.869 [2024-07-24 18:24:24.334535] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1aca370 00:21:15.869 [2024-07-24 18:24:24.334542] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:15.869 [2024-07-24 18:24:24.334679] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1aca340 00:21:15.869 [2024-07-24 18:24:24.334775] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1aca370 00:21:15.869 [2024-07-24 18:24:24.334781] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1aca370 00:21:15.869 [2024-07-24 18:24:24.334841] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.869 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:15.869 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:15.869 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:15.869 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:15.870 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:15.870 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:15.870 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.870 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.870 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.870 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.870 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.870 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.128 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.128 "name": "raid_bdev1", 00:21:16.128 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:16.128 "strip_size_kb": 0, 00:21:16.128 "state": "online", 00:21:16.128 "raid_level": "raid1", 00:21:16.128 "superblock": true, 00:21:16.128 "num_base_bdevs": 4, 00:21:16.128 "num_base_bdevs_discovered": 4, 00:21:16.128 "num_base_bdevs_operational": 4, 00:21:16.128 "base_bdevs_list": [ 00:21:16.128 { 00:21:16.128 "name": "BaseBdev1", 00:21:16.128 "uuid": "12b4238d-654b-5fb1-a020-5ba2e26ec816", 00:21:16.128 "is_configured": true, 00:21:16.128 "data_offset": 2048, 00:21:16.128 "data_size": 63488 00:21:16.128 }, 00:21:16.128 { 00:21:16.128 "name": "BaseBdev2", 00:21:16.128 "uuid": "f1cb1959-4b97-5d48-b6ca-c1431dcf7ea6", 00:21:16.128 
"is_configured": true, 00:21:16.128 "data_offset": 2048, 00:21:16.128 "data_size": 63488 00:21:16.128 }, 00:21:16.128 { 00:21:16.128 "name": "BaseBdev3", 00:21:16.128 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:16.128 "is_configured": true, 00:21:16.128 "data_offset": 2048, 00:21:16.128 "data_size": 63488 00:21:16.128 }, 00:21:16.128 { 00:21:16.128 "name": "BaseBdev4", 00:21:16.128 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:16.128 "is_configured": true, 00:21:16.128 "data_offset": 2048, 00:21:16.128 "data_size": 63488 00:21:16.128 } 00:21:16.129 ] 00:21:16.129 }' 00:21:16.129 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.129 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:16.697 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:16.697 18:24:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:16.697 [2024-07-24 18:24:25.155806] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:16.697 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:21:16.697 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.697 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:16.956 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:21:16.956 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:21:16.956 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:16.956 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:16.956 [2024-07-24 18:24:25.446218] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c671d0 00:21:16.956 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:16.956 Zero copy mechanism will not be used. 00:21:16.956 Running I/O for 60 seconds... 00:21:16.956 [2024-07-24 18:24:25.526915] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:16.956 [2024-07-24 18:24:25.532942] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1c671d0 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 
00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.215 "name": "raid_bdev1", 00:21:17.215 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:17.215 "strip_size_kb": 0, 00:21:17.215 "state": "online", 00:21:17.215 "raid_level": "raid1", 00:21:17.215 "superblock": true, 00:21:17.215 "num_base_bdevs": 4, 00:21:17.215 "num_base_bdevs_discovered": 3, 00:21:17.215 "num_base_bdevs_operational": 3, 00:21:17.215 "base_bdevs_list": [ 00:21:17.215 { 00:21:17.215 "name": null, 00:21:17.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.215 "is_configured": false, 00:21:17.215 "data_offset": 2048, 00:21:17.215 "data_size": 63488 00:21:17.215 }, 00:21:17.215 { 00:21:17.215 "name": "BaseBdev2", 00:21:17.215 "uuid": "f1cb1959-4b97-5d48-b6ca-c1431dcf7ea6", 00:21:17.215 "is_configured": true, 00:21:17.215 "data_offset": 2048, 00:21:17.215 "data_size": 63488 00:21:17.215 }, 00:21:17.215 { 00:21:17.215 "name": "BaseBdev3", 00:21:17.215 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:17.215 "is_configured": true, 00:21:17.215 "data_offset": 2048, 00:21:17.215 "data_size": 63488 00:21:17.215 }, 00:21:17.215 { 00:21:17.215 "name": "BaseBdev4", 00:21:17.215 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:17.215 "is_configured": true, 00:21:17.215 "data_offset": 2048, 00:21:17.215 "data_size": 63488 00:21:17.215 } 00:21:17.215 ] 00:21:17.215 }' 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.215 18:24:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:17.783 18:24:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:18.043 [2024-07-24 18:24:26.417311] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.043 [2024-07-24 18:24:26.454530] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c65790 00:21:18.043 [2024-07-24 18:24:26.456216] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:18.043 18:24:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:21:18.043 [2024-07-24 18:24:26.595393] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:18.301 [2024-07-24 18:24:26.746684] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:18.302 [2024-07-24 18:24:26.747225] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:18.560 [2024-07-24 18:24:27.104555] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:18.819 [2024-07-24 18:24:27.322218] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:18.819 [2024-07-24 18:24:27.322368] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:19.079 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.079 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:19.079 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
00:21:19.079 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:19.079 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:19.079 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.079 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.079 [2024-07-24 18:24:27.561576] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:19.079 [2024-07-24 18:24:27.562603] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:19.079 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:19.079 "name": "raid_bdev1", 00:21:19.079 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:19.079 "strip_size_kb": 0, 00:21:19.079 "state": "online", 00:21:19.079 "raid_level": "raid1", 00:21:19.079 "superblock": true, 00:21:19.079 "num_base_bdevs": 4, 00:21:19.079 "num_base_bdevs_discovered": 4, 00:21:19.079 "num_base_bdevs_operational": 4, 00:21:19.079 "process": { 00:21:19.079 "type": "rebuild", 00:21:19.079 "target": "spare", 00:21:19.079 "progress": { 00:21:19.079 "blocks": 14336, 00:21:19.079 "percent": 22 00:21:19.079 } 00:21:19.079 }, 00:21:19.079 "base_bdevs_list": [ 00:21:19.079 { 00:21:19.079 "name": "spare", 00:21:19.079 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:19.079 "is_configured": true, 00:21:19.079 "data_offset": 2048, 00:21:19.079 "data_size": 63488 00:21:19.079 }, 00:21:19.079 { 00:21:19.079 "name": "BaseBdev2", 00:21:19.079 "uuid": "f1cb1959-4b97-5d48-b6ca-c1431dcf7ea6", 00:21:19.079 "is_configured": true, 00:21:19.079 "data_offset": 2048, 00:21:19.079 "data_size": 
63488 00:21:19.079 }, 00:21:19.079 { 00:21:19.079 "name": "BaseBdev3", 00:21:19.079 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:19.079 "is_configured": true, 00:21:19.079 "data_offset": 2048, 00:21:19.079 "data_size": 63488 00:21:19.079 }, 00:21:19.079 { 00:21:19.079 "name": "BaseBdev4", 00:21:19.079 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:19.079 "is_configured": true, 00:21:19.079 "data_offset": 2048, 00:21:19.079 "data_size": 63488 00:21:19.079 } 00:21:19.079 ] 00:21:19.079 }' 00:21:19.079 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:19.338 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.338 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:19.338 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.338 18:24:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:19.338 [2024-07-24 18:24:27.764518] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:19.338 [2024-07-24 18:24:27.765010] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:19.338 [2024-07-24 18:24:27.880743] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.338 [2024-07-24 18:24:27.881137] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:19.596 [2024-07-24 18:24:27.983125] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:19.596 [2024-07-24 18:24:27.992790] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.596 [2024-07-24 18:24:27.992811] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.596 [2024-07-24 18:24:27.992818] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:19.596 [2024-07-24 18:24:28.013266] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1c671d0 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.596 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.855 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:21:19.855 "name": "raid_bdev1", 00:21:19.855 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:19.855 "strip_size_kb": 0, 00:21:19.855 "state": "online", 00:21:19.855 "raid_level": "raid1", 00:21:19.855 "superblock": true, 00:21:19.855 "num_base_bdevs": 4, 00:21:19.855 "num_base_bdevs_discovered": 3, 00:21:19.855 "num_base_bdevs_operational": 3, 00:21:19.855 "base_bdevs_list": [ 00:21:19.855 { 00:21:19.855 "name": null, 00:21:19.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.855 "is_configured": false, 00:21:19.855 "data_offset": 2048, 00:21:19.855 "data_size": 63488 00:21:19.855 }, 00:21:19.855 { 00:21:19.855 "name": "BaseBdev2", 00:21:19.855 "uuid": "f1cb1959-4b97-5d48-b6ca-c1431dcf7ea6", 00:21:19.855 "is_configured": true, 00:21:19.855 "data_offset": 2048, 00:21:19.855 "data_size": 63488 00:21:19.855 }, 00:21:19.855 { 00:21:19.855 "name": "BaseBdev3", 00:21:19.855 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:19.855 "is_configured": true, 00:21:19.855 "data_offset": 2048, 00:21:19.855 "data_size": 63488 00:21:19.855 }, 00:21:19.855 { 00:21:19.855 "name": "BaseBdev4", 00:21:19.855 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:19.855 "is_configured": true, 00:21:19.855 "data_offset": 2048, 00:21:19.855 "data_size": 63488 00:21:19.855 } 00:21:19.855 ] 00:21:19.855 }' 00:21:19.855 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.855 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local 
target=none 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:20.423 "name": "raid_bdev1", 00:21:20.423 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:20.423 "strip_size_kb": 0, 00:21:20.423 "state": "online", 00:21:20.423 "raid_level": "raid1", 00:21:20.423 "superblock": true, 00:21:20.423 "num_base_bdevs": 4, 00:21:20.423 "num_base_bdevs_discovered": 3, 00:21:20.423 "num_base_bdevs_operational": 3, 00:21:20.423 "base_bdevs_list": [ 00:21:20.423 { 00:21:20.423 "name": null, 00:21:20.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.423 "is_configured": false, 00:21:20.423 "data_offset": 2048, 00:21:20.423 "data_size": 63488 00:21:20.423 }, 00:21:20.423 { 00:21:20.423 "name": "BaseBdev2", 00:21:20.423 "uuid": "f1cb1959-4b97-5d48-b6ca-c1431dcf7ea6", 00:21:20.423 "is_configured": true, 00:21:20.423 "data_offset": 2048, 00:21:20.423 "data_size": 63488 00:21:20.423 }, 00:21:20.423 { 00:21:20.423 "name": "BaseBdev3", 00:21:20.423 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:20.423 "is_configured": true, 00:21:20.423 "data_offset": 2048, 00:21:20.423 "data_size": 63488 00:21:20.423 }, 00:21:20.423 { 00:21:20.423 "name": "BaseBdev4", 00:21:20.423 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:20.423 "is_configured": true, 00:21:20.423 "data_offset": 2048, 00:21:20.423 "data_size": 63488 00:21:20.423 } 00:21:20.423 ] 00:21:20.423 }' 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:20.423 18:24:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:20.683 [2024-07-24 18:24:29.157840] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.683 [2024-07-24 18:24:29.188739] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c7b9c0 00:21:20.683 [2024-07-24 18:24:29.189838] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:20.683 18:24:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:20.942 [2024-07-24 18:24:29.291332] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:20.942 [2024-07-24 18:24:29.291573] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:20.942 [2024-07-24 18:24:29.401690] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:20.942 [2024-07-24 18:24:29.401876] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:21.201 [2024-07-24 18:24:29.645347] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:21.459 [2024-07-24 18:24:29.861999] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:21.718 
[2024-07-24 18:24:30.189814] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:21.718 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.718 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:21.718 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:21.718 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:21.718 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:21.718 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.718 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.978 [2024-07-24 18:24:30.322144] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:21.978 [2024-07-24 18:24:30.322334] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:21.978 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:21.978 "name": "raid_bdev1", 00:21:21.978 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:21.978 "strip_size_kb": 0, 00:21:21.978 "state": "online", 00:21:21.978 "raid_level": "raid1", 00:21:21.978 "superblock": true, 00:21:21.978 "num_base_bdevs": 4, 00:21:21.978 "num_base_bdevs_discovered": 4, 00:21:21.978 "num_base_bdevs_operational": 4, 00:21:21.978 "process": { 00:21:21.978 "type": "rebuild", 00:21:21.978 "target": "spare", 00:21:21.978 "progress": { 00:21:21.978 "blocks": 16384, 
00:21:21.978 "percent": 25 00:21:21.978 } 00:21:21.978 }, 00:21:21.978 "base_bdevs_list": [ 00:21:21.978 { 00:21:21.979 "name": "spare", 00:21:21.979 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:21.979 "is_configured": true, 00:21:21.979 "data_offset": 2048, 00:21:21.979 "data_size": 63488 00:21:21.979 }, 00:21:21.979 { 00:21:21.979 "name": "BaseBdev2", 00:21:21.979 "uuid": "f1cb1959-4b97-5d48-b6ca-c1431dcf7ea6", 00:21:21.979 "is_configured": true, 00:21:21.979 "data_offset": 2048, 00:21:21.979 "data_size": 63488 00:21:21.979 }, 00:21:21.979 { 00:21:21.979 "name": "BaseBdev3", 00:21:21.979 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:21.979 "is_configured": true, 00:21:21.979 "data_offset": 2048, 00:21:21.979 "data_size": 63488 00:21:21.979 }, 00:21:21.979 { 00:21:21.979 "name": "BaseBdev4", 00:21:21.979 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:21.979 "is_configured": true, 00:21:21.979 "data_offset": 2048, 00:21:21.979 "data_size": 63488 00:21:21.979 } 00:21:21.979 ] 00:21:21.979 }' 00:21:21.979 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:21.979 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.979 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:21.979 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.979 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:21:21.979 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:21:21.979 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:21:21.979 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:21:21.979 18:24:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:21.979 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:21:21.979 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:22.237 [2024-07-24 18:24:30.610001] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:22.237 [2024-07-24 18:24:30.649965] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:22.496 [2024-07-24 18:24:30.857190] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1c671d0 00:21:22.496 [2024-07-24 18:24:30.857210] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1c7b9c0 00:21:22.496 [2024-07-24 18:24:30.857241] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:22.496 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:21:22.496 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:21:22.496 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.496 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:22.496 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:22.496 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:22.496 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:22.496 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:22.496 18:24:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.496 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:22.496 "name": "raid_bdev1", 00:21:22.496 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:22.496 "strip_size_kb": 0, 00:21:22.497 "state": "online", 00:21:22.497 "raid_level": "raid1", 00:21:22.497 "superblock": true, 00:21:22.497 "num_base_bdevs": 4, 00:21:22.497 "num_base_bdevs_discovered": 3, 00:21:22.497 "num_base_bdevs_operational": 3, 00:21:22.497 "process": { 00:21:22.497 "type": "rebuild", 00:21:22.497 "target": "spare", 00:21:22.497 "progress": { 00:21:22.497 "blocks": 20480, 00:21:22.497 "percent": 32 00:21:22.497 } 00:21:22.497 }, 00:21:22.497 "base_bdevs_list": [ 00:21:22.497 { 00:21:22.497 "name": "spare", 00:21:22.497 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:22.497 "is_configured": true, 00:21:22.497 "data_offset": 2048, 00:21:22.497 "data_size": 63488 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "name": null, 00:21:22.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.497 "is_configured": false, 00:21:22.497 "data_offset": 2048, 00:21:22.497 "data_size": 63488 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "name": "BaseBdev3", 00:21:22.497 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:22.497 "is_configured": true, 00:21:22.497 "data_offset": 2048, 00:21:22.497 "data_size": 63488 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "name": "BaseBdev4", 00:21:22.497 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:22.497 "is_configured": true, 00:21:22.497 "data_offset": 2048, 00:21:22.497 "data_size": 63488 00:21:22.497 } 00:21:22.497 ] 00:21:22.497 }' 00:21:22.497 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:22.497 
[2024-07-24 18:24:31.081649] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=735 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:22.755 "name": "raid_bdev1", 00:21:22.755 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:22.755 "strip_size_kb": 0, 00:21:22.755 "state": "online", 00:21:22.755 "raid_level": "raid1", 00:21:22.755 "superblock": true, 00:21:22.755 "num_base_bdevs": 4, 
00:21:22.755 "num_base_bdevs_discovered": 3, 00:21:22.755 "num_base_bdevs_operational": 3, 00:21:22.755 "process": { 00:21:22.755 "type": "rebuild", 00:21:22.755 "target": "spare", 00:21:22.755 "progress": { 00:21:22.755 "blocks": 22528, 00:21:22.755 "percent": 35 00:21:22.755 } 00:21:22.755 }, 00:21:22.755 "base_bdevs_list": [ 00:21:22.755 { 00:21:22.755 "name": "spare", 00:21:22.755 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:22.755 "is_configured": true, 00:21:22.755 "data_offset": 2048, 00:21:22.755 "data_size": 63488 00:21:22.755 }, 00:21:22.755 { 00:21:22.755 "name": null, 00:21:22.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.755 "is_configured": false, 00:21:22.755 "data_offset": 2048, 00:21:22.755 "data_size": 63488 00:21:22.755 }, 00:21:22.755 { 00:21:22.755 "name": "BaseBdev3", 00:21:22.755 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:22.755 "is_configured": true, 00:21:22.755 "data_offset": 2048, 00:21:22.755 "data_size": 63488 00:21:22.755 }, 00:21:22.755 { 00:21:22.755 "name": "BaseBdev4", 00:21:22.755 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:22.755 "is_configured": true, 00:21:22.755 "data_offset": 2048, 00:21:22.755 "data_size": 63488 00:21:22.755 } 00:21:22.755 ] 00:21:22.755 }' 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.755 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:23.015 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.015 18:24:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:23.015 [2024-07-24 18:24:31.418389] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 
00:21:23.583 [2024-07-24 18:24:31.888555] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:23.842 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:23.842 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.842 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:23.842 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:23.842 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:23.842 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:23.842 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.842 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.131 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:24.131 "name": "raid_bdev1", 00:21:24.131 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:24.131 "strip_size_kb": 0, 00:21:24.131 "state": "online", 00:21:24.131 "raid_level": "raid1", 00:21:24.131 "superblock": true, 00:21:24.131 "num_base_bdevs": 4, 00:21:24.131 "num_base_bdevs_discovered": 3, 00:21:24.131 "num_base_bdevs_operational": 3, 00:21:24.131 "process": { 00:21:24.131 "type": "rebuild", 00:21:24.131 "target": "spare", 00:21:24.131 "progress": { 00:21:24.131 "blocks": 45056, 00:21:24.131 "percent": 70 00:21:24.131 } 00:21:24.131 }, 00:21:24.131 "base_bdevs_list": [ 00:21:24.131 { 00:21:24.131 "name": "spare", 00:21:24.131 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:24.131 
"is_configured": true, 00:21:24.131 "data_offset": 2048, 00:21:24.131 "data_size": 63488 00:21:24.131 }, 00:21:24.131 { 00:21:24.131 "name": null, 00:21:24.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.131 "is_configured": false, 00:21:24.131 "data_offset": 2048, 00:21:24.131 "data_size": 63488 00:21:24.131 }, 00:21:24.131 { 00:21:24.131 "name": "BaseBdev3", 00:21:24.131 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:24.131 "is_configured": true, 00:21:24.131 "data_offset": 2048, 00:21:24.131 "data_size": 63488 00:21:24.131 }, 00:21:24.131 { 00:21:24.131 "name": "BaseBdev4", 00:21:24.131 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:24.131 "is_configured": true, 00:21:24.131 "data_offset": 2048, 00:21:24.131 "data_size": 63488 00:21:24.131 } 00:21:24.131 ] 00:21:24.131 }' 00:21:24.131 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:24.131 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.131 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:24.131 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.131 18:24:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:24.390 [2024-07-24 18:24:32.893348] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:24.649 [2024-07-24 18:24:33.105859] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:24.908 [2024-07-24 18:24:33.440959] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:25.166 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:25.166 18:24:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.166 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:25.166 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:25.166 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:25.166 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:25.166 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.166 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.425 [2024-07-24 18:24:33.763388] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:25.425 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:25.425 "name": "raid_bdev1", 00:21:25.425 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:25.425 "strip_size_kb": 0, 00:21:25.425 "state": "online", 00:21:25.425 "raid_level": "raid1", 00:21:25.425 "superblock": true, 00:21:25.425 "num_base_bdevs": 4, 00:21:25.425 "num_base_bdevs_discovered": 3, 00:21:25.425 "num_base_bdevs_operational": 3, 00:21:25.425 "process": { 00:21:25.425 "type": "rebuild", 00:21:25.425 "target": "spare", 00:21:25.425 "progress": { 00:21:25.425 "blocks": 63488, 00:21:25.425 "percent": 100 00:21:25.425 } 00:21:25.425 }, 00:21:25.425 "base_bdevs_list": [ 00:21:25.425 { 00:21:25.425 "name": "spare", 00:21:25.425 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:25.425 "is_configured": true, 00:21:25.425 "data_offset": 2048, 00:21:25.425 "data_size": 63488 00:21:25.425 }, 00:21:25.425 { 00:21:25.425 "name": null, 00:21:25.425 
"uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.425 "is_configured": false, 00:21:25.425 "data_offset": 2048, 00:21:25.425 "data_size": 63488 00:21:25.425 }, 00:21:25.425 { 00:21:25.425 "name": "BaseBdev3", 00:21:25.425 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:25.425 "is_configured": true, 00:21:25.425 "data_offset": 2048, 00:21:25.425 "data_size": 63488 00:21:25.425 }, 00:21:25.425 { 00:21:25.425 "name": "BaseBdev4", 00:21:25.425 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:25.425 "is_configured": true, 00:21:25.425 "data_offset": 2048, 00:21:25.425 "data_size": 63488 00:21:25.425 } 00:21:25.425 ] 00:21:25.425 }' 00:21:25.425 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:25.425 [2024-07-24 18:24:33.866550] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:25.425 [2024-07-24 18:24:33.869658] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.425 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.425 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:25.425 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.425 18:24:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:26.362 18:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:26.362 18:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.362 18:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:26.362 18:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:26.362 18:24:34 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:26.362 18:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:26.362 18:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.362 18:24:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:26.621 "name": "raid_bdev1", 00:21:26.621 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:26.621 "strip_size_kb": 0, 00:21:26.621 "state": "online", 00:21:26.621 "raid_level": "raid1", 00:21:26.621 "superblock": true, 00:21:26.621 "num_base_bdevs": 4, 00:21:26.621 "num_base_bdevs_discovered": 3, 00:21:26.621 "num_base_bdevs_operational": 3, 00:21:26.621 "base_bdevs_list": [ 00:21:26.621 { 00:21:26.621 "name": "spare", 00:21:26.621 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:26.621 "is_configured": true, 00:21:26.621 "data_offset": 2048, 00:21:26.621 "data_size": 63488 00:21:26.621 }, 00:21:26.621 { 00:21:26.621 "name": null, 00:21:26.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.621 "is_configured": false, 00:21:26.621 "data_offset": 2048, 00:21:26.621 "data_size": 63488 00:21:26.621 }, 00:21:26.621 { 00:21:26.621 "name": "BaseBdev3", 00:21:26.621 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:26.621 "is_configured": true, 00:21:26.621 "data_offset": 2048, 00:21:26.621 "data_size": 63488 00:21:26.621 }, 00:21:26.621 { 00:21:26.621 "name": "BaseBdev4", 00:21:26.621 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:26.621 "is_configured": true, 00:21:26.621 "data_offset": 2048, 00:21:26.621 "data_size": 63488 00:21:26.621 } 00:21:26.621 ] 00:21:26.621 }' 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq 
-r '.process.type // "none"' 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.621 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:26.881 "name": "raid_bdev1", 00:21:26.881 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:26.881 "strip_size_kb": 0, 00:21:26.881 "state": "online", 00:21:26.881 "raid_level": "raid1", 00:21:26.881 "superblock": true, 00:21:26.881 "num_base_bdevs": 4, 00:21:26.881 "num_base_bdevs_discovered": 3, 00:21:26.881 "num_base_bdevs_operational": 3, 00:21:26.881 "base_bdevs_list": [ 00:21:26.881 { 00:21:26.881 "name": "spare", 00:21:26.881 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:26.881 "is_configured": 
true, 00:21:26.881 "data_offset": 2048, 00:21:26.881 "data_size": 63488 00:21:26.881 }, 00:21:26.881 { 00:21:26.881 "name": null, 00:21:26.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.881 "is_configured": false, 00:21:26.881 "data_offset": 2048, 00:21:26.881 "data_size": 63488 00:21:26.881 }, 00:21:26.881 { 00:21:26.881 "name": "BaseBdev3", 00:21:26.881 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:26.881 "is_configured": true, 00:21:26.881 "data_offset": 2048, 00:21:26.881 "data_size": 63488 00:21:26.881 }, 00:21:26.881 { 00:21:26.881 "name": "BaseBdev4", 00:21:26.881 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:26.881 "is_configured": true, 00:21:26.881 "data_offset": 2048, 00:21:26.881 "data_size": 63488 00:21:26.881 } 00:21:26.881 ] 00:21:26.881 }' 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:26.881 18:24:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.881 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.140 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.140 "name": "raid_bdev1", 00:21:27.140 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:27.140 "strip_size_kb": 0, 00:21:27.140 "state": "online", 00:21:27.140 "raid_level": "raid1", 00:21:27.140 "superblock": true, 00:21:27.140 "num_base_bdevs": 4, 00:21:27.140 "num_base_bdevs_discovered": 3, 00:21:27.140 "num_base_bdevs_operational": 3, 00:21:27.140 "base_bdevs_list": [ 00:21:27.140 { 00:21:27.140 "name": "spare", 00:21:27.140 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:27.140 "is_configured": true, 00:21:27.140 "data_offset": 2048, 00:21:27.140 "data_size": 63488 00:21:27.140 }, 00:21:27.140 { 00:21:27.140 "name": null, 00:21:27.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.140 "is_configured": false, 00:21:27.140 "data_offset": 2048, 00:21:27.140 "data_size": 63488 00:21:27.140 }, 00:21:27.140 { 00:21:27.140 "name": "BaseBdev3", 00:21:27.140 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:27.140 "is_configured": true, 00:21:27.140 "data_offset": 2048, 00:21:27.140 "data_size": 63488 00:21:27.140 }, 00:21:27.140 { 00:21:27.140 "name": "BaseBdev4", 00:21:27.140 "uuid": 
"ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:27.140 "is_configured": true, 00:21:27.140 "data_offset": 2048, 00:21:27.140 "data_size": 63488 00:21:27.140 } 00:21:27.140 ] 00:21:27.140 }' 00:21:27.140 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.140 18:24:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:27.707 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:27.707 [2024-07-24 18:24:36.277974] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:27.707 [2024-07-24 18:24:36.278000] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:27.966 00:21:27.966 Latency(us) 00:21:27.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.966 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:27.966 raid_bdev1 : 10.86 98.59 295.76 0.00 0.00 14058.61 245.76 113246.21 00:21:27.966 =================================================================================================================== 00:21:27.966 Total : 98.59 295.76 0.00 0.00 14058.61 245.76 113246.21 00:21:27.966 [2024-07-24 18:24:36.340834] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.966 [2024-07-24 18:24:36.340855] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.966 [2024-07-24 18:24:36.340919] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:27.966 [2024-07-24 18:24:36.340926] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1aca370 name raid_bdev1, state offline 00:21:27.966 0 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:27.966 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:28.225 /dev/nbd0 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:28.225 1+0 records in 00:21:28.225 1+0 records out 00:21:28.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271926 s, 15.1 MB/s 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:28.225 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:28.484 /dev/nbd1 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:28.484 18:24:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:28.484 1+0 records in 00:21:28.484 1+0 records out 00:21:28.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257865 s, 15.9 MB/s 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:28.484 18:24:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:28.743 
18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:28.743 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.744 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:28.744 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:28.744 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:28.744 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:28.744 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:28.744 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:28.744 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:28.744 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:29.002 /dev/nbd1 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:29.002 18:24:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:29.002 1+0 records in 00:21:29.002 1+0 records out 00:21:29.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251044 s, 16.3 MB/s 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.002 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # 
local i 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:21:29.261 18:24:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:29.519 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:29.778 [2024-07-24 18:24:38.164714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:29.778 [2024-07-24 18:24:38.164755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.778 [2024-07-24 18:24:38.164769] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x1b62c50 00:21:29.778 [2024-07-24 18:24:38.164778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.778 [2024-07-24 18:24:38.165966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.778 [2024-07-24 18:24:38.165989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:29.778 [2024-07-24 18:24:38.166054] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:29.778 [2024-07-24 18:24:38.166073] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:29.778 [2024-07-24 18:24:38.166148] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:29.778 [2024-07-24 18:24:38.166194] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:29.778 spare 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.778 18:24:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.778 [2024-07-24 18:24:38.266485] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b65310 00:21:29.778 [2024-07-24 18:24:38.266496] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:29.778 [2024-07-24 18:24:38.266637] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1aca280 00:21:29.778 [2024-07-24 18:24:38.266742] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b65310 00:21:29.778 [2024-07-24 18:24:38.266749] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b65310 00:21:29.778 [2024-07-24 18:24:38.266822] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:29.778 "name": "raid_bdev1", 00:21:29.778 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:29.778 "strip_size_kb": 0, 00:21:29.778 "state": "online", 00:21:29.778 "raid_level": "raid1", 00:21:29.778 "superblock": true, 00:21:29.778 "num_base_bdevs": 4, 00:21:29.778 "num_base_bdevs_discovered": 3, 00:21:29.778 "num_base_bdevs_operational": 3, 00:21:29.778 "base_bdevs_list": [ 00:21:29.778 { 00:21:29.778 "name": "spare", 00:21:29.778 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:29.778 "is_configured": true, 00:21:29.778 "data_offset": 2048, 00:21:29.778 "data_size": 63488 00:21:29.778 }, 00:21:29.778 { 00:21:29.778 "name": null, 00:21:29.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.778 
"is_configured": false, 00:21:29.778 "data_offset": 2048, 00:21:29.778 "data_size": 63488 00:21:29.778 }, 00:21:29.778 { 00:21:29.778 "name": "BaseBdev3", 00:21:29.778 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:29.778 "is_configured": true, 00:21:29.778 "data_offset": 2048, 00:21:29.778 "data_size": 63488 00:21:29.778 }, 00:21:29.778 { 00:21:29.778 "name": "BaseBdev4", 00:21:29.778 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:29.778 "is_configured": true, 00:21:29.778 "data_offset": 2048, 00:21:29.778 "data_size": 63488 00:21:29.778 } 00:21:29.778 ] 00:21:29.778 }' 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:29.778 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:30.345 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:30.345 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:30.345 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:30.345 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:30.345 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:30.345 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.345 18:24:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.604 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:30.604 "name": "raid_bdev1", 00:21:30.604 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:30.604 "strip_size_kb": 0, 00:21:30.604 "state": "online", 00:21:30.604 "raid_level": "raid1", 
00:21:30.604 "superblock": true, 00:21:30.604 "num_base_bdevs": 4, 00:21:30.604 "num_base_bdevs_discovered": 3, 00:21:30.604 "num_base_bdevs_operational": 3, 00:21:30.604 "base_bdevs_list": [ 00:21:30.604 { 00:21:30.604 "name": "spare", 00:21:30.604 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:30.604 "is_configured": true, 00:21:30.604 "data_offset": 2048, 00:21:30.604 "data_size": 63488 00:21:30.604 }, 00:21:30.604 { 00:21:30.604 "name": null, 00:21:30.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.604 "is_configured": false, 00:21:30.604 "data_offset": 2048, 00:21:30.604 "data_size": 63488 00:21:30.604 }, 00:21:30.604 { 00:21:30.604 "name": "BaseBdev3", 00:21:30.604 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:30.604 "is_configured": true, 00:21:30.604 "data_offset": 2048, 00:21:30.604 "data_size": 63488 00:21:30.604 }, 00:21:30.604 { 00:21:30.604 "name": "BaseBdev4", 00:21:30.604 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:30.604 "is_configured": true, 00:21:30.604 "data_offset": 2048, 00:21:30.604 "data_size": 63488 00:21:30.604 } 00:21:30.604 ] 00:21:30.604 }' 00:21:30.604 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:30.604 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:30.604 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:30.604 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:30.604 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.604 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:30.863 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == 
\s\p\a\r\e ]] 00:21:30.863 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:31.122 [2024-07-24 18:24:39.468236] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.122 "name": "raid_bdev1", 00:21:31.122 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 
00:21:31.122 "strip_size_kb": 0, 00:21:31.122 "state": "online", 00:21:31.122 "raid_level": "raid1", 00:21:31.122 "superblock": true, 00:21:31.122 "num_base_bdevs": 4, 00:21:31.122 "num_base_bdevs_discovered": 2, 00:21:31.122 "num_base_bdevs_operational": 2, 00:21:31.122 "base_bdevs_list": [ 00:21:31.122 { 00:21:31.122 "name": null, 00:21:31.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.122 "is_configured": false, 00:21:31.122 "data_offset": 2048, 00:21:31.122 "data_size": 63488 00:21:31.122 }, 00:21:31.122 { 00:21:31.122 "name": null, 00:21:31.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.122 "is_configured": false, 00:21:31.122 "data_offset": 2048, 00:21:31.122 "data_size": 63488 00:21:31.122 }, 00:21:31.122 { 00:21:31.122 "name": "BaseBdev3", 00:21:31.122 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:31.122 "is_configured": true, 00:21:31.122 "data_offset": 2048, 00:21:31.122 "data_size": 63488 00:21:31.122 }, 00:21:31.122 { 00:21:31.122 "name": "BaseBdev4", 00:21:31.122 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:31.122 "is_configured": true, 00:21:31.122 "data_offset": 2048, 00:21:31.122 "data_size": 63488 00:21:31.122 } 00:21:31.122 ] 00:21:31.122 }' 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.122 18:24:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:31.690 18:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:31.690 [2024-07-24 18:24:40.278417] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:31.690 [2024-07-24 18:24:40.278545] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:31.690 [2024-07-24 18:24:40.278557] 
bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:31.690 [2024-07-24 18:24:40.278579] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:31.690 [2024-07-24 18:24:40.282529] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1aca280 00:21:31.690 [2024-07-24 18:24:40.284292] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.948 18:24:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:21:32.885 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.885 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:32.885 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:32.885 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:32.885 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:32.885 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.885 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.885 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:32.885 "name": "raid_bdev1", 00:21:32.885 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:32.885 "strip_size_kb": 0, 00:21:32.885 "state": "online", 00:21:32.885 "raid_level": "raid1", 00:21:32.885 "superblock": true, 00:21:32.885 "num_base_bdevs": 4, 00:21:32.885 "num_base_bdevs_discovered": 3, 00:21:32.885 "num_base_bdevs_operational": 3, 00:21:32.885 "process": { 00:21:32.885 "type": "rebuild", 
00:21:32.885 "target": "spare", 00:21:32.885 "progress": { 00:21:32.885 "blocks": 22528, 00:21:32.885 "percent": 35 00:21:32.885 } 00:21:32.885 }, 00:21:32.885 "base_bdevs_list": [ 00:21:32.885 { 00:21:32.885 "name": "spare", 00:21:32.885 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:32.885 "is_configured": true, 00:21:32.885 "data_offset": 2048, 00:21:32.885 "data_size": 63488 00:21:32.885 }, 00:21:32.885 { 00:21:32.885 "name": null, 00:21:32.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.885 "is_configured": false, 00:21:32.885 "data_offset": 2048, 00:21:32.885 "data_size": 63488 00:21:32.885 }, 00:21:32.885 { 00:21:32.885 "name": "BaseBdev3", 00:21:32.885 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:32.885 "is_configured": true, 00:21:32.885 "data_offset": 2048, 00:21:32.885 "data_size": 63488 00:21:32.885 }, 00:21:32.885 { 00:21:32.885 "name": "BaseBdev4", 00:21:32.885 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:32.885 "is_configured": true, 00:21:32.885 "data_offset": 2048, 00:21:32.885 "data_size": 63488 00:21:32.885 } 00:21:32.885 ] 00:21:32.885 }' 00:21:33.145 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:33.145 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.145 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:33.145 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.145 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:33.145 [2024-07-24 18:24:41.714941] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:33.405 [2024-07-24 18:24:41.794695] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:21:33.405 [2024-07-24 18:24:41.794732] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.405 [2024-07-24 18:24:41.794759] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:33.405 [2024-07-24 18:24:41.794765] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.405 
"name": "raid_bdev1", 00:21:33.405 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:33.405 "strip_size_kb": 0, 00:21:33.405 "state": "online", 00:21:33.405 "raid_level": "raid1", 00:21:33.405 "superblock": true, 00:21:33.405 "num_base_bdevs": 4, 00:21:33.405 "num_base_bdevs_discovered": 2, 00:21:33.405 "num_base_bdevs_operational": 2, 00:21:33.405 "base_bdevs_list": [ 00:21:33.405 { 00:21:33.405 "name": null, 00:21:33.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.405 "is_configured": false, 00:21:33.405 "data_offset": 2048, 00:21:33.405 "data_size": 63488 00:21:33.405 }, 00:21:33.405 { 00:21:33.405 "name": null, 00:21:33.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.405 "is_configured": false, 00:21:33.405 "data_offset": 2048, 00:21:33.405 "data_size": 63488 00:21:33.405 }, 00:21:33.405 { 00:21:33.405 "name": "BaseBdev3", 00:21:33.405 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:33.405 "is_configured": true, 00:21:33.405 "data_offset": 2048, 00:21:33.405 "data_size": 63488 00:21:33.405 }, 00:21:33.405 { 00:21:33.405 "name": "BaseBdev4", 00:21:33.405 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:33.405 "is_configured": true, 00:21:33.405 "data_offset": 2048, 00:21:33.405 "data_size": 63488 00:21:33.405 } 00:21:33.405 ] 00:21:33.405 }' 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.405 18:24:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:33.974 18:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:34.233 [2024-07-24 18:24:42.656769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:34.233 [2024-07-24 18:24:42.656811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.233 [2024-07-24 
18:24:42.656841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ac8d10 00:21:34.233 [2024-07-24 18:24:42.656849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.233 [2024-07-24 18:24:42.657124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.233 [2024-07-24 18:24:42.657136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:34.233 [2024-07-24 18:24:42.657194] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:34.233 [2024-07-24 18:24:42.657201] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:34.233 [2024-07-24 18:24:42.657208] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:34.233 [2024-07-24 18:24:42.657221] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:34.233 [2024-07-24 18:24:42.661141] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ac5300 00:21:34.233 spare 00:21:34.233 [2024-07-24 18:24:42.662136] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:34.233 18:24:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:21:35.170 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:35.170 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:35.170 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:35.170 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:35.170 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:35.170 
18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.170 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.430 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.430 "name": "raid_bdev1", 00:21:35.430 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:35.430 "strip_size_kb": 0, 00:21:35.430 "state": "online", 00:21:35.430 "raid_level": "raid1", 00:21:35.430 "superblock": true, 00:21:35.430 "num_base_bdevs": 4, 00:21:35.430 "num_base_bdevs_discovered": 3, 00:21:35.430 "num_base_bdevs_operational": 3, 00:21:35.430 "process": { 00:21:35.430 "type": "rebuild", 00:21:35.430 "target": "spare", 00:21:35.430 "progress": { 00:21:35.430 "blocks": 22528, 00:21:35.430 "percent": 35 00:21:35.430 } 00:21:35.430 }, 00:21:35.430 "base_bdevs_list": [ 00:21:35.430 { 00:21:35.430 "name": "spare", 00:21:35.430 "uuid": "a99fb58f-df27-507d-945d-7d5dacf06cf5", 00:21:35.430 "is_configured": true, 00:21:35.430 "data_offset": 2048, 00:21:35.430 "data_size": 63488 00:21:35.430 }, 00:21:35.430 { 00:21:35.430 "name": null, 00:21:35.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.430 "is_configured": false, 00:21:35.430 "data_offset": 2048, 00:21:35.430 "data_size": 63488 00:21:35.430 }, 00:21:35.430 { 00:21:35.430 "name": "BaseBdev3", 00:21:35.430 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:35.430 "is_configured": true, 00:21:35.430 "data_offset": 2048, 00:21:35.430 "data_size": 63488 00:21:35.430 }, 00:21:35.430 { 00:21:35.430 "name": "BaseBdev4", 00:21:35.430 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:35.430 "is_configured": true, 00:21:35.430 "data_offset": 2048, 00:21:35.430 "data_size": 63488 00:21:35.430 } 00:21:35.430 ] 00:21:35.430 }' 00:21:35.430 18:24:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:35.430 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.430 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:35.430 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.430 18:24:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:35.689 [2024-07-24 18:24:44.105586] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:35.689 [2024-07-24 18:24:44.172527] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:35.689 [2024-07-24 18:24:44.172563] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.689 [2024-07-24 18:24:44.172589] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:35.689 [2024-07-24 18:24:44.172595] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:35.689 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:35.689 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:35.689 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:35.689 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:35.689 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:35.689 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:35.689 
18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.689 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.689 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.689 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.689 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.690 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.949 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.949 "name": "raid_bdev1", 00:21:35.949 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:35.949 "strip_size_kb": 0, 00:21:35.949 "state": "online", 00:21:35.949 "raid_level": "raid1", 00:21:35.949 "superblock": true, 00:21:35.949 "num_base_bdevs": 4, 00:21:35.949 "num_base_bdevs_discovered": 2, 00:21:35.949 "num_base_bdevs_operational": 2, 00:21:35.949 "base_bdevs_list": [ 00:21:35.949 { 00:21:35.949 "name": null, 00:21:35.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.949 "is_configured": false, 00:21:35.949 "data_offset": 2048, 00:21:35.949 "data_size": 63488 00:21:35.949 }, 00:21:35.949 { 00:21:35.949 "name": null, 00:21:35.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.949 "is_configured": false, 00:21:35.949 "data_offset": 2048, 00:21:35.949 "data_size": 63488 00:21:35.949 }, 00:21:35.949 { 00:21:35.949 "name": "BaseBdev3", 00:21:35.949 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:35.949 "is_configured": true, 00:21:35.949 "data_offset": 2048, 00:21:35.949 "data_size": 63488 00:21:35.949 }, 00:21:35.949 { 00:21:35.949 "name": "BaseBdev4", 00:21:35.949 "uuid": 
"ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:35.949 "is_configured": true, 00:21:35.949 "data_offset": 2048, 00:21:35.949 "data_size": 63488 00:21:35.949 } 00:21:35.949 ] 00:21:35.949 }' 00:21:35.949 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.949 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:36.518 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:36.518 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:36.518 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:36.518 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:36.518 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:36.518 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.518 18:24:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.518 18:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:36.518 "name": "raid_bdev1", 00:21:36.518 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:36.518 "strip_size_kb": 0, 00:21:36.518 "state": "online", 00:21:36.518 "raid_level": "raid1", 00:21:36.518 "superblock": true, 00:21:36.518 "num_base_bdevs": 4, 00:21:36.518 "num_base_bdevs_discovered": 2, 00:21:36.518 "num_base_bdevs_operational": 2, 00:21:36.518 "base_bdevs_list": [ 00:21:36.518 { 00:21:36.518 "name": null, 00:21:36.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.518 "is_configured": false, 00:21:36.518 "data_offset": 2048, 00:21:36.518 "data_size": 63488 
00:21:36.518 }, 00:21:36.518 { 00:21:36.518 "name": null, 00:21:36.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.518 "is_configured": false, 00:21:36.518 "data_offset": 2048, 00:21:36.518 "data_size": 63488 00:21:36.518 }, 00:21:36.518 { 00:21:36.518 "name": "BaseBdev3", 00:21:36.518 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:36.518 "is_configured": true, 00:21:36.518 "data_offset": 2048, 00:21:36.518 "data_size": 63488 00:21:36.518 }, 00:21:36.518 { 00:21:36.518 "name": "BaseBdev4", 00:21:36.518 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:36.518 "is_configured": true, 00:21:36.518 "data_offset": 2048, 00:21:36.518 "data_size": 63488 00:21:36.518 } 00:21:36.518 ] 00:21:36.518 }' 00:21:36.518 18:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:36.518 18:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:36.518 18:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:36.778 18:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:36.778 18:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:36.778 18:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:37.037 [2024-07-24 18:24:45.435881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:37.037 [2024-07-24 18:24:45.435917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.037 [2024-07-24 18:24:45.435931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ac7da0 
00:21:37.037 [2024-07-24 18:24:45.435939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.037 [2024-07-24 18:24:45.436194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.037 [2024-07-24 18:24:45.436207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:37.037 [2024-07-24 18:24:45.436250] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:37.037 [2024-07-24 18:24:45.436258] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:37.037 [2024-07-24 18:24:45.436265] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:37.037 BaseBdev1 00:21:37.037 18:24:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.975 18:24:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.975 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.234 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.234 "name": "raid_bdev1", 00:21:38.234 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:38.234 "strip_size_kb": 0, 00:21:38.234 "state": "online", 00:21:38.234 "raid_level": "raid1", 00:21:38.234 "superblock": true, 00:21:38.234 "num_base_bdevs": 4, 00:21:38.234 "num_base_bdevs_discovered": 2, 00:21:38.234 "num_base_bdevs_operational": 2, 00:21:38.234 "base_bdevs_list": [ 00:21:38.234 { 00:21:38.234 "name": null, 00:21:38.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.234 "is_configured": false, 00:21:38.234 "data_offset": 2048, 00:21:38.234 "data_size": 63488 00:21:38.234 }, 00:21:38.234 { 00:21:38.234 "name": null, 00:21:38.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.234 "is_configured": false, 00:21:38.234 "data_offset": 2048, 00:21:38.234 "data_size": 63488 00:21:38.234 }, 00:21:38.234 { 00:21:38.234 "name": "BaseBdev3", 00:21:38.234 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:38.234 "is_configured": true, 00:21:38.234 "data_offset": 2048, 00:21:38.234 "data_size": 63488 00:21:38.234 }, 00:21:38.234 { 00:21:38.234 "name": "BaseBdev4", 00:21:38.234 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:38.234 "is_configured": true, 00:21:38.234 "data_offset": 2048, 00:21:38.234 "data_size": 63488 00:21:38.234 } 00:21:38.234 ] 00:21:38.234 }' 00:21:38.234 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.234 18:24:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:38.836 "name": "raid_bdev1", 00:21:38.836 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:38.836 "strip_size_kb": 0, 00:21:38.836 "state": "online", 00:21:38.836 "raid_level": "raid1", 00:21:38.836 "superblock": true, 00:21:38.836 "num_base_bdevs": 4, 00:21:38.836 "num_base_bdevs_discovered": 2, 00:21:38.836 "num_base_bdevs_operational": 2, 00:21:38.836 "base_bdevs_list": [ 00:21:38.836 { 00:21:38.836 "name": null, 00:21:38.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.836 "is_configured": false, 00:21:38.836 "data_offset": 2048, 00:21:38.836 "data_size": 63488 00:21:38.836 }, 00:21:38.836 { 00:21:38.836 "name": null, 00:21:38.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.836 "is_configured": false, 00:21:38.836 "data_offset": 2048, 00:21:38.836 "data_size": 63488 00:21:38.836 }, 00:21:38.836 { 00:21:38.836 "name": "BaseBdev3", 00:21:38.836 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 
00:21:38.836 "is_configured": true, 00:21:38.836 "data_offset": 2048, 00:21:38.836 "data_size": 63488 00:21:38.836 }, 00:21:38.836 { 00:21:38.836 "name": "BaseBdev4", 00:21:38.836 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:38.836 "is_configured": true, 00:21:38.836 "data_offset": 2048, 00:21:38.836 "data_size": 63488 00:21:38.836 } 00:21:38.836 ] 00:21:38.836 }' 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:38.836 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:38.837 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:21:38.837 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:38.837 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:38.837 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.837 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:38.837 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.837 
18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:38.837 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.837 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:38.837 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:38.837 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:39.096 [2024-07-24 18:24:47.529507] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:39.096 [2024-07-24 18:24:47.529608] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:39.096 [2024-07-24 18:24:47.529618] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:39.096 request: 00:21:39.096 { 00:21:39.096 "base_bdev": "BaseBdev1", 00:21:39.096 "raid_bdev": "raid_bdev1", 00:21:39.096 "method": "bdev_raid_add_base_bdev", 00:21:39.096 "req_id": 1 00:21:39.096 } 00:21:39.096 Got JSON-RPC error response 00:21:39.096 response: 00:21:39.096 { 00:21:39.096 "code": -22, 00:21:39.096 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:39.096 } 00:21:39.096 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:21:39.096 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.096 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' 
]] 00:21:39.096 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.096 18:24:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.034 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.293 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.293 "name": "raid_bdev1", 00:21:40.293 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:40.293 "strip_size_kb": 0, 00:21:40.293 "state": "online", 00:21:40.293 "raid_level": "raid1", 00:21:40.293 "superblock": 
true, 00:21:40.293 "num_base_bdevs": 4, 00:21:40.293 "num_base_bdevs_discovered": 2, 00:21:40.293 "num_base_bdevs_operational": 2, 00:21:40.293 "base_bdevs_list": [ 00:21:40.293 { 00:21:40.293 "name": null, 00:21:40.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.293 "is_configured": false, 00:21:40.293 "data_offset": 2048, 00:21:40.293 "data_size": 63488 00:21:40.293 }, 00:21:40.293 { 00:21:40.293 "name": null, 00:21:40.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.293 "is_configured": false, 00:21:40.293 "data_offset": 2048, 00:21:40.293 "data_size": 63488 00:21:40.293 }, 00:21:40.293 { 00:21:40.293 "name": "BaseBdev3", 00:21:40.293 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:40.293 "is_configured": true, 00:21:40.293 "data_offset": 2048, 00:21:40.293 "data_size": 63488 00:21:40.293 }, 00:21:40.293 { 00:21:40.293 "name": "BaseBdev4", 00:21:40.293 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:40.293 "is_configured": true, 00:21:40.293 "data_offset": 2048, 00:21:40.293 "data_size": 63488 00:21:40.293 } 00:21:40.293 ] 00:21:40.293 }' 00:21:40.293 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.293 18:24:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:40.862 "name": "raid_bdev1", 00:21:40.862 "uuid": "06a1a4f5-526f-4baa-8a35-71ed3b8e6426", 00:21:40.862 "strip_size_kb": 0, 00:21:40.862 "state": "online", 00:21:40.862 "raid_level": "raid1", 00:21:40.862 "superblock": true, 00:21:40.862 "num_base_bdevs": 4, 00:21:40.862 "num_base_bdevs_discovered": 2, 00:21:40.862 "num_base_bdevs_operational": 2, 00:21:40.862 "base_bdevs_list": [ 00:21:40.862 { 00:21:40.862 "name": null, 00:21:40.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.862 "is_configured": false, 00:21:40.862 "data_offset": 2048, 00:21:40.862 "data_size": 63488 00:21:40.862 }, 00:21:40.862 { 00:21:40.862 "name": null, 00:21:40.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.862 "is_configured": false, 00:21:40.862 "data_offset": 2048, 00:21:40.862 "data_size": 63488 00:21:40.862 }, 00:21:40.862 { 00:21:40.862 "name": "BaseBdev3", 00:21:40.862 "uuid": "16c462b8-400c-538e-af00-ae7275135221", 00:21:40.862 "is_configured": true, 00:21:40.862 "data_offset": 2048, 00:21:40.862 "data_size": 63488 00:21:40.862 }, 00:21:40.862 { 00:21:40.862 "name": "BaseBdev4", 00:21:40.862 "uuid": "ad0d9474-fb5d-5a22-9c9c-7f2cd94ed1d0", 00:21:40.862 "is_configured": true, 00:21:40.862 "data_offset": 2048, 00:21:40.862 "data_size": 63488 00:21:40.862 } 00:21:40.862 ] 00:21:40.862 }' 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:40.862 18:24:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 2286373 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 2286373 ']' 00:21:40.862 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 2286373 00:21:41.122 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:21:41.122 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.122 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2286373 00:21:41.122 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:41.122 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:41.122 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2286373' 00:21:41.122 killing process with pid 2286373 00:21:41.122 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 2286373 00:21:41.122 Received shutdown signal, test time was about 24.004714 seconds 00:21:41.122 00:21:41.122 Latency(us) 00:21:41.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.122 =================================================================================================================== 00:21:41.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.122 [2024-07-24 18:24:49.511203] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:41.122 [2024-07-24 18:24:49.511284] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:41.122 [2024-07-24 18:24:49.511327] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:21:41.122 [2024-07-24 18:24:49.511334] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b65310 name raid_bdev1, state offline 00:21:41.122 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 2286373 00:21:41.122 [2024-07-24 18:24:49.544585] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:41.382 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:21:41.382 00:21:41.382 real 0m28.167s 00:21:41.382 user 0m42.470s 00:21:41.382 sys 0m4.242s 00:21:41.382 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:41.382 18:24:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:41.382 ************************************ 00:21:41.382 END TEST raid_rebuild_test_sb_io 00:21:41.382 ************************************ 00:21:41.382 18:24:49 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:21:41.382 18:24:49 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:21:41.382 18:24:49 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:21:41.382 18:24:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:41.382 18:24:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:41.382 18:24:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:41.382 ************************************ 00:21:41.382 START TEST raid_state_function_test_sb_4k 00:21:41.382 ************************************ 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # 
local num_base_bdevs=2 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:41.382 18:24:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=2291662 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2291662' 00:21:41.382 Process raid pid: 2291662 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 2291662 /var/tmp/spdk-raid.sock 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 2291662 ']' 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:41.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.382 18:24:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.382 [2024-07-24 18:24:49.850688] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:21:41.382 [2024-07-24 18:24:49.850728] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:01.0 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:01.1 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:01.2 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:01.3 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:01.4 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:01.5 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:01.6 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:01.7 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:02.0 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:02.1 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:02.2 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:02.3 cannot be used 00:21:41.382 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:02.4 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:02.5 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:02.6 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b3:02.7 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:01.0 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:01.1 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:01.2 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:01.3 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:01.4 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:01.5 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:01.6 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:01.7 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:02.0 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:02.1 cannot be used 00:21:41.382 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:02.2 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:02.3 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:02.4 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:02.5 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:02.6 cannot be used 00:21:41.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:41.382 EAL: Requested device 0000:b5:02.7 cannot be used 00:21:41.382 [2024-07-24 18:24:49.940842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.641 [2024-07-24 18:24:50.011155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.641 [2024-07-24 18:24:50.062566] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:41.641 [2024-07-24 18:24:50.062584] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.209 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.209 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:21:42.209 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:42.209 [2024-07-24 18:24:50.794911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:42.209 [2024-07-24 18:24:50.794939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:21:42.209 [2024-07-24 18:24:50.794945] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:42.209 [2024-07-24 18:24:50.794953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.469 "name": "Existed_Raid", 
00:21:42.469 "uuid": "eb546143-0964-4122-8c93-57a094b27780", 00:21:42.469 "strip_size_kb": 0, 00:21:42.469 "state": "configuring", 00:21:42.469 "raid_level": "raid1", 00:21:42.469 "superblock": true, 00:21:42.469 "num_base_bdevs": 2, 00:21:42.469 "num_base_bdevs_discovered": 0, 00:21:42.469 "num_base_bdevs_operational": 2, 00:21:42.469 "base_bdevs_list": [ 00:21:42.469 { 00:21:42.469 "name": "BaseBdev1", 00:21:42.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.469 "is_configured": false, 00:21:42.469 "data_offset": 0, 00:21:42.469 "data_size": 0 00:21:42.469 }, 00:21:42.469 { 00:21:42.469 "name": "BaseBdev2", 00:21:42.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.469 "is_configured": false, 00:21:42.469 "data_offset": 0, 00:21:42.469 "data_size": 0 00:21:42.469 } 00:21:42.469 ] 00:21:42.469 }' 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.469 18:24:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.037 18:24:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:43.037 [2024-07-24 18:24:51.608909] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:43.037 [2024-07-24 18:24:51.608924] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdf31a0 name Existed_Raid, state configuring 00:21:43.037 18:24:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:43.296 [2024-07-24 18:24:51.785379] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:43.296 [2024-07-24 18:24:51.785397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:43.296 [2024-07-24 18:24:51.785403] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:43.296 [2024-07-24 18:24:51.785410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:43.296 18:24:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:21:43.555 [2024-07-24 18:24:51.974401] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:43.555 BaseBdev1 00:21:43.555 18:24:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:43.555 18:24:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:43.555 18:24:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:43.555 18:24:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:21:43.555 18:24:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:43.555 18:24:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:43.555 18:24:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:43.814 [ 00:21:43.814 { 00:21:43.814 "name": "BaseBdev1", 00:21:43.814 "aliases": [ 00:21:43.814 "3e779cd8-d221-4f52-b781-fbdb0fc69b90" 00:21:43.814 ], 00:21:43.814 
"product_name": "Malloc disk", 00:21:43.814 "block_size": 4096, 00:21:43.814 "num_blocks": 8192, 00:21:43.814 "uuid": "3e779cd8-d221-4f52-b781-fbdb0fc69b90", 00:21:43.814 "assigned_rate_limits": { 00:21:43.814 "rw_ios_per_sec": 0, 00:21:43.814 "rw_mbytes_per_sec": 0, 00:21:43.814 "r_mbytes_per_sec": 0, 00:21:43.814 "w_mbytes_per_sec": 0 00:21:43.814 }, 00:21:43.814 "claimed": true, 00:21:43.814 "claim_type": "exclusive_write", 00:21:43.814 "zoned": false, 00:21:43.814 "supported_io_types": { 00:21:43.814 "read": true, 00:21:43.814 "write": true, 00:21:43.814 "unmap": true, 00:21:43.814 "flush": true, 00:21:43.814 "reset": true, 00:21:43.814 "nvme_admin": false, 00:21:43.814 "nvme_io": false, 00:21:43.814 "nvme_io_md": false, 00:21:43.814 "write_zeroes": true, 00:21:43.814 "zcopy": true, 00:21:43.814 "get_zone_info": false, 00:21:43.814 "zone_management": false, 00:21:43.814 "zone_append": false, 00:21:43.814 "compare": false, 00:21:43.814 "compare_and_write": false, 00:21:43.814 "abort": true, 00:21:43.814 "seek_hole": false, 00:21:43.814 "seek_data": false, 00:21:43.814 "copy": true, 00:21:43.814 "nvme_iov_md": false 00:21:43.814 }, 00:21:43.814 "memory_domains": [ 00:21:43.814 { 00:21:43.814 "dma_device_id": "system", 00:21:43.814 "dma_device_type": 1 00:21:43.814 }, 00:21:43.814 { 00:21:43.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.814 "dma_device_type": 2 00:21:43.814 } 00:21:43.814 ], 00:21:43.814 "driver_specific": {} 00:21:43.814 } 00:21:43.814 ] 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.814 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.073 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.073 "name": "Existed_Raid", 00:21:44.073 "uuid": "52775b48-19f7-4e46-8afb-819f7812e302", 00:21:44.073 "strip_size_kb": 0, 00:21:44.073 "state": "configuring", 00:21:44.073 "raid_level": "raid1", 00:21:44.073 "superblock": true, 00:21:44.073 "num_base_bdevs": 2, 00:21:44.073 "num_base_bdevs_discovered": 1, 00:21:44.073 "num_base_bdevs_operational": 2, 00:21:44.073 "base_bdevs_list": [ 00:21:44.073 { 00:21:44.073 "name": "BaseBdev1", 00:21:44.073 "uuid": "3e779cd8-d221-4f52-b781-fbdb0fc69b90", 00:21:44.073 "is_configured": true, 00:21:44.073 "data_offset": 256, 00:21:44.073 "data_size": 7936 00:21:44.073 }, 00:21:44.073 { 00:21:44.073 "name": "BaseBdev2", 00:21:44.073 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:44.073 "is_configured": false, 00:21:44.073 "data_offset": 0, 00:21:44.073 "data_size": 0 00:21:44.073 } 00:21:44.073 ] 00:21:44.073 }' 00:21:44.073 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.073 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.642 18:24:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:44.642 [2024-07-24 18:24:53.145421] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:44.642 [2024-07-24 18:24:53.145449] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdf2a90 name Existed_Raid, state configuring 00:21:44.642 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:44.901 [2024-07-24 18:24:53.313894] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:44.901 [2024-07-24 18:24:53.314955] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:44.901 [2024-07-24 18:24:53.314979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.901 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.160 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.160 "name": "Existed_Raid", 00:21:45.160 "uuid": "8e28ba61-007d-424f-9219-ffd657d3a70c", 00:21:45.160 "strip_size_kb": 0, 00:21:45.160 "state": "configuring", 00:21:45.160 "raid_level": "raid1", 00:21:45.160 "superblock": true, 00:21:45.160 "num_base_bdevs": 2, 00:21:45.160 "num_base_bdevs_discovered": 1, 00:21:45.160 "num_base_bdevs_operational": 2, 00:21:45.160 "base_bdevs_list": [ 00:21:45.160 { 00:21:45.160 "name": "BaseBdev1", 00:21:45.160 "uuid": "3e779cd8-d221-4f52-b781-fbdb0fc69b90", 00:21:45.160 "is_configured": true, 00:21:45.160 "data_offset": 256, 
00:21:45.160 "data_size": 7936 00:21:45.160 }, 00:21:45.160 { 00:21:45.160 "name": "BaseBdev2", 00:21:45.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.160 "is_configured": false, 00:21:45.160 "data_offset": 0, 00:21:45.160 "data_size": 0 00:21:45.160 } 00:21:45.160 ] 00:21:45.160 }' 00:21:45.160 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.160 18:24:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.728 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:21:45.728 [2024-07-24 18:24:54.174821] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:45.728 [2024-07-24 18:24:54.174930] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xdf3880 00:21:45.728 [2024-07-24 18:24:54.174939] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:45.728 [2024-07-24 18:24:54.175055] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfa69a0 00:21:45.728 [2024-07-24 18:24:54.175140] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xdf3880 00:21:45.728 [2024-07-24 18:24:54.175147] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xdf3880 00:21:45.728 [2024-07-24 18:24:54.175210] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.728 BaseBdev2 00:21:45.728 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:45.728 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:45.728 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:21:45.728 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:21:45.728 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:45.728 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:45.728 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:45.987 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:45.987 [ 00:21:45.987 { 00:21:45.987 "name": "BaseBdev2", 00:21:45.987 "aliases": [ 00:21:45.987 "886cddad-ecfc-4516-a024-f96c6cbdbba4" 00:21:45.987 ], 00:21:45.987 "product_name": "Malloc disk", 00:21:45.987 "block_size": 4096, 00:21:45.987 "num_blocks": 8192, 00:21:45.987 "uuid": "886cddad-ecfc-4516-a024-f96c6cbdbba4", 00:21:45.987 "assigned_rate_limits": { 00:21:45.987 "rw_ios_per_sec": 0, 00:21:45.987 "rw_mbytes_per_sec": 0, 00:21:45.987 "r_mbytes_per_sec": 0, 00:21:45.987 "w_mbytes_per_sec": 0 00:21:45.987 }, 00:21:45.987 "claimed": true, 00:21:45.987 "claim_type": "exclusive_write", 00:21:45.987 "zoned": false, 00:21:45.987 "supported_io_types": { 00:21:45.987 "read": true, 00:21:45.987 "write": true, 00:21:45.987 "unmap": true, 00:21:45.987 "flush": true, 00:21:45.987 "reset": true, 00:21:45.987 "nvme_admin": false, 00:21:45.987 "nvme_io": false, 00:21:45.987 "nvme_io_md": false, 00:21:45.987 "write_zeroes": true, 00:21:45.987 "zcopy": true, 00:21:45.987 "get_zone_info": false, 00:21:45.987 "zone_management": false, 00:21:45.987 "zone_append": false, 00:21:45.987 "compare": false, 00:21:45.987 "compare_and_write": false, 00:21:45.987 "abort": true, 00:21:45.987 
"seek_hole": false, 00:21:45.988 "seek_data": false, 00:21:45.988 "copy": true, 00:21:45.988 "nvme_iov_md": false 00:21:45.988 }, 00:21:45.988 "memory_domains": [ 00:21:45.988 { 00:21:45.988 "dma_device_id": "system", 00:21:45.988 "dma_device_type": 1 00:21:45.988 }, 00:21:45.988 { 00:21:45.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.988 "dma_device_type": 2 00:21:45.988 } 00:21:45.988 ], 00:21:45.988 "driver_specific": {} 00:21:45.988 } 00:21:45.988 ] 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.988 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.247 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.247 "name": "Existed_Raid", 00:21:46.247 "uuid": "8e28ba61-007d-424f-9219-ffd657d3a70c", 00:21:46.247 "strip_size_kb": 0, 00:21:46.247 "state": "online", 00:21:46.247 "raid_level": "raid1", 00:21:46.247 "superblock": true, 00:21:46.247 "num_base_bdevs": 2, 00:21:46.247 "num_base_bdevs_discovered": 2, 00:21:46.247 "num_base_bdevs_operational": 2, 00:21:46.247 "base_bdevs_list": [ 00:21:46.247 { 00:21:46.247 "name": "BaseBdev1", 00:21:46.247 "uuid": "3e779cd8-d221-4f52-b781-fbdb0fc69b90", 00:21:46.247 "is_configured": true, 00:21:46.247 "data_offset": 256, 00:21:46.247 "data_size": 7936 00:21:46.247 }, 00:21:46.247 { 00:21:46.247 "name": "BaseBdev2", 00:21:46.247 "uuid": "886cddad-ecfc-4516-a024-f96c6cbdbba4", 00:21:46.247 "is_configured": true, 00:21:46.247 "data_offset": 256, 00:21:46.247 "data_size": 7936 00:21:46.247 } 00:21:46.247 ] 00:21:46.247 }' 00:21:46.247 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.247 18:24:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.816 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:46.816 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:46.816 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:46.816 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:46.816 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:46.816 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:46.816 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:46.816 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:46.816 [2024-07-24 18:24:55.374077] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.816 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:46.816 "name": "Existed_Raid", 00:21:46.816 "aliases": [ 00:21:46.816 "8e28ba61-007d-424f-9219-ffd657d3a70c" 00:21:46.816 ], 00:21:46.816 "product_name": "Raid Volume", 00:21:46.816 "block_size": 4096, 00:21:46.816 "num_blocks": 7936, 00:21:46.816 "uuid": "8e28ba61-007d-424f-9219-ffd657d3a70c", 00:21:46.816 "assigned_rate_limits": { 00:21:46.816 "rw_ios_per_sec": 0, 00:21:46.816 "rw_mbytes_per_sec": 0, 00:21:46.816 "r_mbytes_per_sec": 0, 00:21:46.816 "w_mbytes_per_sec": 0 00:21:46.816 }, 00:21:46.816 "claimed": false, 00:21:46.816 "zoned": false, 00:21:46.816 "supported_io_types": { 00:21:46.816 "read": true, 00:21:46.816 "write": true, 00:21:46.816 "unmap": false, 00:21:46.816 "flush": false, 00:21:46.816 "reset": true, 00:21:46.816 "nvme_admin": false, 00:21:46.816 "nvme_io": false, 00:21:46.816 "nvme_io_md": false, 00:21:46.816 "write_zeroes": true, 00:21:46.816 "zcopy": false, 00:21:46.816 "get_zone_info": false, 00:21:46.816 "zone_management": false, 00:21:46.816 "zone_append": false, 00:21:46.816 "compare": false, 00:21:46.816 "compare_and_write": false, 00:21:46.816 "abort": false, 00:21:46.816 "seek_hole": false, 00:21:46.816 "seek_data": false, 
00:21:46.816 "copy": false, 00:21:46.816 "nvme_iov_md": false 00:21:46.816 }, 00:21:46.816 "memory_domains": [ 00:21:46.816 { 00:21:46.816 "dma_device_id": "system", 00:21:46.816 "dma_device_type": 1 00:21:46.816 }, 00:21:46.816 { 00:21:46.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.816 "dma_device_type": 2 00:21:46.816 }, 00:21:46.816 { 00:21:46.816 "dma_device_id": "system", 00:21:46.816 "dma_device_type": 1 00:21:46.816 }, 00:21:46.816 { 00:21:46.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.816 "dma_device_type": 2 00:21:46.816 } 00:21:46.816 ], 00:21:46.816 "driver_specific": { 00:21:46.816 "raid": { 00:21:46.816 "uuid": "8e28ba61-007d-424f-9219-ffd657d3a70c", 00:21:46.816 "strip_size_kb": 0, 00:21:46.816 "state": "online", 00:21:46.816 "raid_level": "raid1", 00:21:46.816 "superblock": true, 00:21:46.816 "num_base_bdevs": 2, 00:21:46.816 "num_base_bdevs_discovered": 2, 00:21:46.816 "num_base_bdevs_operational": 2, 00:21:46.816 "base_bdevs_list": [ 00:21:46.816 { 00:21:46.816 "name": "BaseBdev1", 00:21:46.816 "uuid": "3e779cd8-d221-4f52-b781-fbdb0fc69b90", 00:21:46.816 "is_configured": true, 00:21:46.816 "data_offset": 256, 00:21:46.816 "data_size": 7936 00:21:46.816 }, 00:21:46.816 { 00:21:46.816 "name": "BaseBdev2", 00:21:46.816 "uuid": "886cddad-ecfc-4516-a024-f96c6cbdbba4", 00:21:46.816 "is_configured": true, 00:21:46.816 "data_offset": 256, 00:21:46.816 "data_size": 7936 00:21:46.816 } 00:21:46.816 ] 00:21:46.816 } 00:21:46.816 } 00:21:46.816 }' 00:21:46.816 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:47.076 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:47.076 BaseBdev2' 00:21:47.076 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:47.076 18:24:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:47.076 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:47.076 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:47.076 "name": "BaseBdev1", 00:21:47.076 "aliases": [ 00:21:47.076 "3e779cd8-d221-4f52-b781-fbdb0fc69b90" 00:21:47.076 ], 00:21:47.076 "product_name": "Malloc disk", 00:21:47.076 "block_size": 4096, 00:21:47.076 "num_blocks": 8192, 00:21:47.076 "uuid": "3e779cd8-d221-4f52-b781-fbdb0fc69b90", 00:21:47.076 "assigned_rate_limits": { 00:21:47.076 "rw_ios_per_sec": 0, 00:21:47.076 "rw_mbytes_per_sec": 0, 00:21:47.076 "r_mbytes_per_sec": 0, 00:21:47.076 "w_mbytes_per_sec": 0 00:21:47.076 }, 00:21:47.076 "claimed": true, 00:21:47.076 "claim_type": "exclusive_write", 00:21:47.076 "zoned": false, 00:21:47.076 "supported_io_types": { 00:21:47.076 "read": true, 00:21:47.076 "write": true, 00:21:47.076 "unmap": true, 00:21:47.076 "flush": true, 00:21:47.076 "reset": true, 00:21:47.076 "nvme_admin": false, 00:21:47.076 "nvme_io": false, 00:21:47.076 "nvme_io_md": false, 00:21:47.076 "write_zeroes": true, 00:21:47.076 "zcopy": true, 00:21:47.076 "get_zone_info": false, 00:21:47.076 "zone_management": false, 00:21:47.076 "zone_append": false, 00:21:47.076 "compare": false, 00:21:47.076 "compare_and_write": false, 00:21:47.076 "abort": true, 00:21:47.076 "seek_hole": false, 00:21:47.076 "seek_data": false, 00:21:47.076 "copy": true, 00:21:47.076 "nvme_iov_md": false 00:21:47.076 }, 00:21:47.076 "memory_domains": [ 00:21:47.076 { 00:21:47.076 "dma_device_id": "system", 00:21:47.076 "dma_device_type": 1 00:21:47.076 }, 00:21:47.076 { 00:21:47.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.076 "dma_device_type": 2 00:21:47.076 } 00:21:47.076 ], 00:21:47.076 "driver_specific": 
{} 00:21:47.076 }' 00:21:47.076 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.076 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.336 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:47.336 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.336 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.336 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:47.336 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.336 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.336 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:47.336 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.336 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.595 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:47.595 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:47.595 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:47.595 18:24:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:47.595 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:47.595 "name": "BaseBdev2", 00:21:47.595 "aliases": [ 00:21:47.595 "886cddad-ecfc-4516-a024-f96c6cbdbba4" 00:21:47.595 
], 00:21:47.595 "product_name": "Malloc disk", 00:21:47.595 "block_size": 4096, 00:21:47.595 "num_blocks": 8192, 00:21:47.595 "uuid": "886cddad-ecfc-4516-a024-f96c6cbdbba4", 00:21:47.595 "assigned_rate_limits": { 00:21:47.595 "rw_ios_per_sec": 0, 00:21:47.595 "rw_mbytes_per_sec": 0, 00:21:47.595 "r_mbytes_per_sec": 0, 00:21:47.595 "w_mbytes_per_sec": 0 00:21:47.595 }, 00:21:47.595 "claimed": true, 00:21:47.595 "claim_type": "exclusive_write", 00:21:47.595 "zoned": false, 00:21:47.595 "supported_io_types": { 00:21:47.595 "read": true, 00:21:47.595 "write": true, 00:21:47.595 "unmap": true, 00:21:47.595 "flush": true, 00:21:47.595 "reset": true, 00:21:47.595 "nvme_admin": false, 00:21:47.595 "nvme_io": false, 00:21:47.595 "nvme_io_md": false, 00:21:47.595 "write_zeroes": true, 00:21:47.595 "zcopy": true, 00:21:47.595 "get_zone_info": false, 00:21:47.595 "zone_management": false, 00:21:47.595 "zone_append": false, 00:21:47.595 "compare": false, 00:21:47.595 "compare_and_write": false, 00:21:47.595 "abort": true, 00:21:47.595 "seek_hole": false, 00:21:47.595 "seek_data": false, 00:21:47.595 "copy": true, 00:21:47.595 "nvme_iov_md": false 00:21:47.595 }, 00:21:47.595 "memory_domains": [ 00:21:47.595 { 00:21:47.595 "dma_device_id": "system", 00:21:47.595 "dma_device_type": 1 00:21:47.595 }, 00:21:47.595 { 00:21:47.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.595 "dma_device_type": 2 00:21:47.595 } 00:21:47.595 ], 00:21:47.595 "driver_specific": {} 00:21:47.595 }' 00:21:47.595 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.595 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:47.855 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:48.114 [2024-07-24 18:24:56.573030] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:48.114 18:24:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.114 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.374 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.374 "name": "Existed_Raid", 00:21:48.374 "uuid": "8e28ba61-007d-424f-9219-ffd657d3a70c", 00:21:48.374 "strip_size_kb": 0, 00:21:48.374 "state": "online", 00:21:48.374 "raid_level": "raid1", 00:21:48.374 "superblock": true, 00:21:48.374 "num_base_bdevs": 2, 00:21:48.374 "num_base_bdevs_discovered": 1, 00:21:48.374 "num_base_bdevs_operational": 1, 00:21:48.374 "base_bdevs_list": [ 00:21:48.374 { 00:21:48.374 "name": null, 00:21:48.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.374 "is_configured": false, 00:21:48.374 "data_offset": 256, 00:21:48.374 "data_size": 7936 00:21:48.374 }, 00:21:48.374 { 
00:21:48.374 "name": "BaseBdev2", 00:21:48.374 "uuid": "886cddad-ecfc-4516-a024-f96c6cbdbba4", 00:21:48.374 "is_configured": true, 00:21:48.374 "data_offset": 256, 00:21:48.374 "data_size": 7936 00:21:48.374 } 00:21:48.374 ] 00:21:48.374 }' 00:21:48.374 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.374 18:24:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.942 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:48.942 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:48.942 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.942 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:48.942 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:48.942 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:48.942 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:49.202 [2024-07-24 18:24:57.608505] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:49.202 [2024-07-24 18:24:57.608570] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:49.202 [2024-07-24 18:24:57.618329] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.202 [2024-07-24 18:24:57.618353] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:49.202 [2024-07-24 
18:24:57.618360] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xdf3880 name Existed_Raid, state offline 00:21:49.202 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:49.202 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:49.202 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:49.202 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 2291662 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 2291662 ']' 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 2291662 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2291662 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:49.461 18:24:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2291662' 00:21:49.461 killing process with pid 2291662 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 2291662 00:21:49.461 [2024-07-24 18:24:57.859764] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:49.461 18:24:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 2291662 00:21:49.461 [2024-07-24 18:24:57.860552] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:49.461 18:24:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:21:49.461 00:21:49.461 real 0m8.228s 00:21:49.461 user 0m14.412s 00:21:49.461 sys 0m1.665s 00:21:49.461 18:24:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:49.461 18:24:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.461 ************************************ 00:21:49.461 END TEST raid_state_function_test_sb_4k 00:21:49.461 ************************************ 00:21:49.721 18:24:58 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:21:49.721 18:24:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:49.721 18:24:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:49.721 18:24:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:49.721 ************************************ 00:21:49.721 START TEST raid_superblock_test_4k 00:21:49.721 ************************************ 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=2293355 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 2293355 /var/tmp/spdk-raid.sock 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 2293355 ']' 
00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:49.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.721 18:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.721 [2024-07-24 18:24:58.178037] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:21:49.721 [2024-07-24 18:24:58.178087] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293355 ] 00:21:49.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.721 EAL: Requested device 0000:b3:01.0 cannot be used 00:21:49.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.721 EAL: Requested device 0000:b3:01.1 cannot be used 00:21:49.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.721 EAL: Requested device 0000:b3:01.2 cannot be used 00:21:49.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.721 EAL: Requested device 0000:b3:01.3 cannot be used 00:21:49.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.721 EAL: Requested device 0000:b3:01.4 cannot be used 00:21:49.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 
0000:b3:01.5 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b3:01.6 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b3:01.7 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b3:02.0 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b3:02.1 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b3:02.2 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b3:02.3 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b3:02.4 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b3:02.5 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b3:02.6 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b3:02.7 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:01.0 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:01.1 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:01.2 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:01.3 cannot be 
used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:01.4 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:01.5 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:01.6 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:01.7 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:02.0 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:02.1 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:02.2 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:02.3 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:02.4 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:02.5 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:02.6 cannot be used 00:21:49.722 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:49.722 EAL: Requested device 0000:b5:02.7 cannot be used 00:21:49.722 [2024-07-24 18:24:58.271671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.982 [2024-07-24 18:24:58.346087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.982 [2024-07-24 18:24:58.402758] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:21:49.982 [2024-07-24 18:24:58.402799] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:50.550 18:24:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:21:50.550 malloc1 00:21:50.550 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:50.810 [2024-07-24 18:24:59.299229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:50.810 [2024-07-24 18:24:59.299269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:21:50.810 [2024-07-24 18:24:59.299281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x125ccb0 00:21:50.810 [2024-07-24 18:24:59.299289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.810 [2024-07-24 18:24:59.300351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.810 [2024-07-24 18:24:59.300373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:50.810 pt1 00:21:50.810 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:50.810 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:50.810 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:50.810 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:50.810 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:50.810 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:50.810 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:50.810 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:50.810 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:21:51.069 malloc2 00:21:51.069 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:51.069 [2024-07-24 18:24:59.659823] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:51.069 [2024-07-24 18:24:59.659855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.069 [2024-07-24 18:24:59.659865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x125e0b0 00:21:51.069 [2024-07-24 18:24:59.659873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.069 [2024-07-24 18:24:59.660836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.069 [2024-07-24 18:24:59.660872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:51.069 pt2 00:21:51.327 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:51.327 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:51.327 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:21:51.327 [2024-07-24 18:24:59.828270] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:51.327 [2024-07-24 18:24:59.829000] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:51.327 [2024-07-24 18:24:59.829094] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14009b0 00:21:51.327 [2024-07-24 18:24:59.829103] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:51.327 [2024-07-24 18:24:59.829211] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1254820 00:21:51.327 [2024-07-24 18:24:59.829303] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14009b0 00:21:51.328 [2024-07-24 18:24:59.829309] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14009b0 
00:21:51.328 [2024-07-24 18:24:59.829365] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.328 18:24:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.586 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.586 "name": "raid_bdev1", 00:21:51.586 "uuid": "5c2e95f8-b20b-446e-a21c-769bd599878b", 00:21:51.586 "strip_size_kb": 0, 00:21:51.586 "state": "online", 00:21:51.586 "raid_level": "raid1", 00:21:51.586 "superblock": true, 00:21:51.586 "num_base_bdevs": 2, 00:21:51.586 "num_base_bdevs_discovered": 2, 00:21:51.586 
"num_base_bdevs_operational": 2, 00:21:51.586 "base_bdevs_list": [ 00:21:51.586 { 00:21:51.586 "name": "pt1", 00:21:51.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:51.586 "is_configured": true, 00:21:51.586 "data_offset": 256, 00:21:51.586 "data_size": 7936 00:21:51.586 }, 00:21:51.586 { 00:21:51.586 "name": "pt2", 00:21:51.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:51.586 "is_configured": true, 00:21:51.586 "data_offset": 256, 00:21:51.586 "data_size": 7936 00:21:51.586 } 00:21:51.586 ] 00:21:51.586 }' 00:21:51.586 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.586 18:25:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.154 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:52.154 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:52.154 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:52.154 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:52.154 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:52.154 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:52.154 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:52.154 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:52.154 [2024-07-24 18:25:00.646524] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:52.154 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:52.154 "name": "raid_bdev1", 00:21:52.154 "aliases": [ 00:21:52.154 
"5c2e95f8-b20b-446e-a21c-769bd599878b" 00:21:52.154 ], 00:21:52.154 "product_name": "Raid Volume", 00:21:52.154 "block_size": 4096, 00:21:52.154 "num_blocks": 7936, 00:21:52.154 "uuid": "5c2e95f8-b20b-446e-a21c-769bd599878b", 00:21:52.154 "assigned_rate_limits": { 00:21:52.154 "rw_ios_per_sec": 0, 00:21:52.154 "rw_mbytes_per_sec": 0, 00:21:52.154 "r_mbytes_per_sec": 0, 00:21:52.154 "w_mbytes_per_sec": 0 00:21:52.154 }, 00:21:52.154 "claimed": false, 00:21:52.154 "zoned": false, 00:21:52.154 "supported_io_types": { 00:21:52.154 "read": true, 00:21:52.154 "write": true, 00:21:52.154 "unmap": false, 00:21:52.154 "flush": false, 00:21:52.154 "reset": true, 00:21:52.154 "nvme_admin": false, 00:21:52.154 "nvme_io": false, 00:21:52.154 "nvme_io_md": false, 00:21:52.154 "write_zeroes": true, 00:21:52.154 "zcopy": false, 00:21:52.154 "get_zone_info": false, 00:21:52.154 "zone_management": false, 00:21:52.154 "zone_append": false, 00:21:52.154 "compare": false, 00:21:52.154 "compare_and_write": false, 00:21:52.154 "abort": false, 00:21:52.154 "seek_hole": false, 00:21:52.154 "seek_data": false, 00:21:52.154 "copy": false, 00:21:52.154 "nvme_iov_md": false 00:21:52.154 }, 00:21:52.154 "memory_domains": [ 00:21:52.154 { 00:21:52.154 "dma_device_id": "system", 00:21:52.154 "dma_device_type": 1 00:21:52.154 }, 00:21:52.154 { 00:21:52.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.154 "dma_device_type": 2 00:21:52.154 }, 00:21:52.154 { 00:21:52.154 "dma_device_id": "system", 00:21:52.154 "dma_device_type": 1 00:21:52.154 }, 00:21:52.154 { 00:21:52.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.154 "dma_device_type": 2 00:21:52.154 } 00:21:52.154 ], 00:21:52.154 "driver_specific": { 00:21:52.154 "raid": { 00:21:52.154 "uuid": "5c2e95f8-b20b-446e-a21c-769bd599878b", 00:21:52.154 "strip_size_kb": 0, 00:21:52.154 "state": "online", 00:21:52.154 "raid_level": "raid1", 00:21:52.154 "superblock": true, 00:21:52.154 "num_base_bdevs": 2, 00:21:52.154 
"num_base_bdevs_discovered": 2, 00:21:52.154 "num_base_bdevs_operational": 2, 00:21:52.154 "base_bdevs_list": [ 00:21:52.154 { 00:21:52.154 "name": "pt1", 00:21:52.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:52.154 "is_configured": true, 00:21:52.154 "data_offset": 256, 00:21:52.154 "data_size": 7936 00:21:52.154 }, 00:21:52.154 { 00:21:52.154 "name": "pt2", 00:21:52.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:52.154 "is_configured": true, 00:21:52.154 "data_offset": 256, 00:21:52.154 "data_size": 7936 00:21:52.154 } 00:21:52.154 ] 00:21:52.154 } 00:21:52.154 } 00:21:52.155 }' 00:21:52.155 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:52.155 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:52.155 pt2' 00:21:52.155 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:52.155 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:52.155 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.414 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.414 "name": "pt1", 00:21:52.414 "aliases": [ 00:21:52.414 "00000000-0000-0000-0000-000000000001" 00:21:52.414 ], 00:21:52.414 "product_name": "passthru", 00:21:52.414 "block_size": 4096, 00:21:52.414 "num_blocks": 8192, 00:21:52.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:52.414 "assigned_rate_limits": { 00:21:52.414 "rw_ios_per_sec": 0, 00:21:52.414 "rw_mbytes_per_sec": 0, 00:21:52.414 "r_mbytes_per_sec": 0, 00:21:52.414 "w_mbytes_per_sec": 0 00:21:52.414 }, 00:21:52.414 "claimed": true, 00:21:52.414 "claim_type": "exclusive_write", 00:21:52.414 
"zoned": false, 00:21:52.414 "supported_io_types": { 00:21:52.414 "read": true, 00:21:52.414 "write": true, 00:21:52.414 "unmap": true, 00:21:52.414 "flush": true, 00:21:52.414 "reset": true, 00:21:52.414 "nvme_admin": false, 00:21:52.414 "nvme_io": false, 00:21:52.414 "nvme_io_md": false, 00:21:52.414 "write_zeroes": true, 00:21:52.414 "zcopy": true, 00:21:52.414 "get_zone_info": false, 00:21:52.414 "zone_management": false, 00:21:52.414 "zone_append": false, 00:21:52.414 "compare": false, 00:21:52.414 "compare_and_write": false, 00:21:52.414 "abort": true, 00:21:52.414 "seek_hole": false, 00:21:52.414 "seek_data": false, 00:21:52.414 "copy": true, 00:21:52.414 "nvme_iov_md": false 00:21:52.414 }, 00:21:52.414 "memory_domains": [ 00:21:52.414 { 00:21:52.414 "dma_device_id": "system", 00:21:52.414 "dma_device_type": 1 00:21:52.414 }, 00:21:52.414 { 00:21:52.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.414 "dma_device_type": 2 00:21:52.414 } 00:21:52.414 ], 00:21:52.414 "driver_specific": { 00:21:52.414 "passthru": { 00:21:52.414 "name": "pt1", 00:21:52.414 "base_bdev_name": "malloc1" 00:21:52.414 } 00:21:52.414 } 00:21:52.414 }' 00:21:52.414 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.414 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.414 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:52.414 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.414 18:25:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.673 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:52.673 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.673 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.673 18:25:01 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:52.673 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.673 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.673 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.673 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:52.673 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:52.673 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.932 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.932 "name": "pt2", 00:21:52.932 "aliases": [ 00:21:52.932 "00000000-0000-0000-0000-000000000002" 00:21:52.932 ], 00:21:52.932 "product_name": "passthru", 00:21:52.932 "block_size": 4096, 00:21:52.932 "num_blocks": 8192, 00:21:52.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:52.932 "assigned_rate_limits": { 00:21:52.932 "rw_ios_per_sec": 0, 00:21:52.932 "rw_mbytes_per_sec": 0, 00:21:52.932 "r_mbytes_per_sec": 0, 00:21:52.932 "w_mbytes_per_sec": 0 00:21:52.932 }, 00:21:52.932 "claimed": true, 00:21:52.932 "claim_type": "exclusive_write", 00:21:52.932 "zoned": false, 00:21:52.932 "supported_io_types": { 00:21:52.932 "read": true, 00:21:52.932 "write": true, 00:21:52.932 "unmap": true, 00:21:52.932 "flush": true, 00:21:52.932 "reset": true, 00:21:52.932 "nvme_admin": false, 00:21:52.932 "nvme_io": false, 00:21:52.932 "nvme_io_md": false, 00:21:52.932 "write_zeroes": true, 00:21:52.932 "zcopy": true, 00:21:52.932 "get_zone_info": false, 00:21:52.932 "zone_management": false, 00:21:52.932 "zone_append": false, 00:21:52.932 "compare": false, 00:21:52.932 
"compare_and_write": false, 00:21:52.932 "abort": true, 00:21:52.932 "seek_hole": false, 00:21:52.932 "seek_data": false, 00:21:52.932 "copy": true, 00:21:52.932 "nvme_iov_md": false 00:21:52.932 }, 00:21:52.932 "memory_domains": [ 00:21:52.932 { 00:21:52.932 "dma_device_id": "system", 00:21:52.932 "dma_device_type": 1 00:21:52.932 }, 00:21:52.932 { 00:21:52.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.932 "dma_device_type": 2 00:21:52.932 } 00:21:52.932 ], 00:21:52.932 "driver_specific": { 00:21:52.932 "passthru": { 00:21:52.932 "name": "pt2", 00:21:52.932 "base_bdev_name": "malloc2" 00:21:52.932 } 00:21:52.932 } 00:21:52.932 }' 00:21:52.932 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.932 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.932 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:52.932 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.932 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.932 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:52.932 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.247 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.247 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:53.247 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.247 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.247 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:53.247 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:53.247 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:53.247 [2024-07-24 18:25:01.793479] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:53.247 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=5c2e95f8-b20b-446e-a21c-769bd599878b 00:21:53.247 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 5c2e95f8-b20b-446e-a21c-769bd599878b ']' 00:21:53.247 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:53.513 [2024-07-24 18:25:01.961762] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:53.513 [2024-07-24 18:25:01.961778] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:53.513 [2024-07-24 18:25:01.961823] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:53.513 [2024-07-24 18:25:01.961864] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:53.513 [2024-07-24 18:25:01.961872] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14009b0 name raid_bdev1, state offline 00:21:53.513 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.513 18:25:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:53.772 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:53.772 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:53.772 
18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:53.772 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:53.772 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:53.772 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:54.031 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:54.031 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.290 18:25:02 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:54.290 [2024-07-24 18:25:02.799923] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:54.290 [2024-07-24 18:25:02.800871] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:54.290 [2024-07-24 18:25:02.800917] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:54.290 [2024-07-24 18:25:02.800946] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:54.290 [2024-07-24 18:25:02.800958] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:54.290 [2024-07-24 18:25:02.800964] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1400730 name raid_bdev1, state configuring 00:21:54.290 request: 
00:21:54.290 { 00:21:54.290 "name": "raid_bdev1", 00:21:54.290 "raid_level": "raid1", 00:21:54.290 "base_bdevs": [ 00:21:54.290 "malloc1", 00:21:54.290 "malloc2" 00:21:54.290 ], 00:21:54.290 "superblock": false, 00:21:54.290 "method": "bdev_raid_create", 00:21:54.290 "req_id": 1 00:21:54.290 } 00:21:54.290 Got JSON-RPC error response 00:21:54.290 response: 00:21:54.290 { 00:21:54.290 "code": -17, 00:21:54.290 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:54.290 } 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:54.290 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.550 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:54.550 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:54.550 18:25:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:54.550 [2024-07-24 18:25:03.128733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:54.550 [2024-07-24 18:25:03.128761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.550 [2024-07-24 18:25:03.128775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x125cee0 00:21:54.550 [2024-07-24 18:25:03.128783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.550 [2024-07-24 18:25:03.129848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.550 [2024-07-24 18:25:03.129869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:54.550 [2024-07-24 18:25:03.129913] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:54.550 [2024-07-24 18:25:03.129930] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:54.550 pt1 00:21:54.550 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:54.550 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:54.550 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:54.550 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:54.550 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:54.550 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:54.550 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.809 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.809 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.809 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.809 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:21:54.809 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.809 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.809 "name": "raid_bdev1", 00:21:54.809 "uuid": "5c2e95f8-b20b-446e-a21c-769bd599878b", 00:21:54.809 "strip_size_kb": 0, 00:21:54.809 "state": "configuring", 00:21:54.809 "raid_level": "raid1", 00:21:54.810 "superblock": true, 00:21:54.810 "num_base_bdevs": 2, 00:21:54.810 "num_base_bdevs_discovered": 1, 00:21:54.810 "num_base_bdevs_operational": 2, 00:21:54.810 "base_bdevs_list": [ 00:21:54.810 { 00:21:54.810 "name": "pt1", 00:21:54.810 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:54.810 "is_configured": true, 00:21:54.810 "data_offset": 256, 00:21:54.810 "data_size": 7936 00:21:54.810 }, 00:21:54.810 { 00:21:54.810 "name": null, 00:21:54.810 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:54.810 "is_configured": false, 00:21:54.810 "data_offset": 256, 00:21:54.810 "data_size": 7936 00:21:54.810 } 00:21:54.810 ] 00:21:54.810 }' 00:21:54.810 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.810 18:25:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.377 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:21:55.377 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:55.377 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:55.377 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:55.377 [2024-07-24 18:25:03.946889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:55.377 
[2024-07-24 18:25:03.946928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.377 [2024-07-24 18:25:03.946941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13f4e30 00:21:55.377 [2024-07-24 18:25:03.946950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.377 [2024-07-24 18:25:03.947219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.377 [2024-07-24 18:25:03.947231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:55.377 [2024-07-24 18:25:03.947276] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:55.377 [2024-07-24 18:25:03.947289] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:55.377 [2024-07-24 18:25:03.947359] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x13f5cd0 00:21:55.377 [2024-07-24 18:25:03.947365] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:55.377 [2024-07-24 18:25:03.947482] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1256c80 00:21:55.377 [2024-07-24 18:25:03.947576] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13f5cd0 00:21:55.377 [2024-07-24 18:25:03.947582] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13f5cd0 00:21:55.377 [2024-07-24 18:25:03.947658] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.377 pt2 00:21:55.377 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:55.377 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:55.377 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:55.377 18:25:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:55.377 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:55.377 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:55.378 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:55.378 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:55.378 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:55.378 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:55.378 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:55.378 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:55.637 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.637 18:25:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.637 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:55.637 "name": "raid_bdev1", 00:21:55.637 "uuid": "5c2e95f8-b20b-446e-a21c-769bd599878b", 00:21:55.637 "strip_size_kb": 0, 00:21:55.637 "state": "online", 00:21:55.637 "raid_level": "raid1", 00:21:55.637 "superblock": true, 00:21:55.637 "num_base_bdevs": 2, 00:21:55.637 "num_base_bdevs_discovered": 2, 00:21:55.637 "num_base_bdevs_operational": 2, 00:21:55.637 "base_bdevs_list": [ 00:21:55.637 { 00:21:55.637 "name": "pt1", 00:21:55.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.637 "is_configured": true, 00:21:55.637 "data_offset": 256, 00:21:55.637 "data_size": 7936 
00:21:55.637 }, 00:21:55.637 { 00:21:55.637 "name": "pt2", 00:21:55.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.637 "is_configured": true, 00:21:55.637 "data_offset": 256, 00:21:55.637 "data_size": 7936 00:21:55.637 } 00:21:55.637 ] 00:21:55.637 }' 00:21:55.637 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:55.637 18:25:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.206 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:56.206 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:56.206 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:56.206 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:56.206 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:56.206 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:56.206 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:56.206 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:56.206 [2024-07-24 18:25:04.765153] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.206 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:56.206 "name": "raid_bdev1", 00:21:56.206 "aliases": [ 00:21:56.206 "5c2e95f8-b20b-446e-a21c-769bd599878b" 00:21:56.206 ], 00:21:56.206 "product_name": "Raid Volume", 00:21:56.206 "block_size": 4096, 00:21:56.206 "num_blocks": 7936, 00:21:56.206 "uuid": "5c2e95f8-b20b-446e-a21c-769bd599878b", 00:21:56.206 "assigned_rate_limits": { 
00:21:56.206 "rw_ios_per_sec": 0, 00:21:56.206 "rw_mbytes_per_sec": 0, 00:21:56.206 "r_mbytes_per_sec": 0, 00:21:56.206 "w_mbytes_per_sec": 0 00:21:56.206 }, 00:21:56.206 "claimed": false, 00:21:56.206 "zoned": false, 00:21:56.206 "supported_io_types": { 00:21:56.206 "read": true, 00:21:56.206 "write": true, 00:21:56.206 "unmap": false, 00:21:56.206 "flush": false, 00:21:56.206 "reset": true, 00:21:56.206 "nvme_admin": false, 00:21:56.206 "nvme_io": false, 00:21:56.206 "nvme_io_md": false, 00:21:56.206 "write_zeroes": true, 00:21:56.206 "zcopy": false, 00:21:56.206 "get_zone_info": false, 00:21:56.206 "zone_management": false, 00:21:56.206 "zone_append": false, 00:21:56.206 "compare": false, 00:21:56.206 "compare_and_write": false, 00:21:56.206 "abort": false, 00:21:56.206 "seek_hole": false, 00:21:56.206 "seek_data": false, 00:21:56.206 "copy": false, 00:21:56.206 "nvme_iov_md": false 00:21:56.206 }, 00:21:56.206 "memory_domains": [ 00:21:56.206 { 00:21:56.206 "dma_device_id": "system", 00:21:56.206 "dma_device_type": 1 00:21:56.206 }, 00:21:56.206 { 00:21:56.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.206 "dma_device_type": 2 00:21:56.206 }, 00:21:56.206 { 00:21:56.206 "dma_device_id": "system", 00:21:56.206 "dma_device_type": 1 00:21:56.206 }, 00:21:56.206 { 00:21:56.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.206 "dma_device_type": 2 00:21:56.206 } 00:21:56.206 ], 00:21:56.206 "driver_specific": { 00:21:56.206 "raid": { 00:21:56.206 "uuid": "5c2e95f8-b20b-446e-a21c-769bd599878b", 00:21:56.206 "strip_size_kb": 0, 00:21:56.206 "state": "online", 00:21:56.206 "raid_level": "raid1", 00:21:56.206 "superblock": true, 00:21:56.206 "num_base_bdevs": 2, 00:21:56.206 "num_base_bdevs_discovered": 2, 00:21:56.206 "num_base_bdevs_operational": 2, 00:21:56.206 "base_bdevs_list": [ 00:21:56.206 { 00:21:56.206 "name": "pt1", 00:21:56.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.206 "is_configured": true, 00:21:56.206 "data_offset": 256, 
00:21:56.206 "data_size": 7936 00:21:56.206 }, 00:21:56.206 { 00:21:56.206 "name": "pt2", 00:21:56.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.206 "is_configured": true, 00:21:56.206 "data_offset": 256, 00:21:56.206 "data_size": 7936 00:21:56.206 } 00:21:56.206 ] 00:21:56.206 } 00:21:56.206 } 00:21:56.206 }' 00:21:56.206 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:56.466 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:56.466 pt2' 00:21:56.466 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:56.466 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:56.466 18:25:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:56.466 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:56.466 "name": "pt1", 00:21:56.466 "aliases": [ 00:21:56.466 "00000000-0000-0000-0000-000000000001" 00:21:56.466 ], 00:21:56.466 "product_name": "passthru", 00:21:56.466 "block_size": 4096, 00:21:56.466 "num_blocks": 8192, 00:21:56.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.466 "assigned_rate_limits": { 00:21:56.466 "rw_ios_per_sec": 0, 00:21:56.466 "rw_mbytes_per_sec": 0, 00:21:56.466 "r_mbytes_per_sec": 0, 00:21:56.466 "w_mbytes_per_sec": 0 00:21:56.466 }, 00:21:56.466 "claimed": true, 00:21:56.466 "claim_type": "exclusive_write", 00:21:56.466 "zoned": false, 00:21:56.466 "supported_io_types": { 00:21:56.466 "read": true, 00:21:56.466 "write": true, 00:21:56.466 "unmap": true, 00:21:56.466 "flush": true, 00:21:56.466 "reset": true, 00:21:56.466 "nvme_admin": false, 00:21:56.466 "nvme_io": false, 00:21:56.466 "nvme_io_md": 
false, 00:21:56.466 "write_zeroes": true, 00:21:56.466 "zcopy": true, 00:21:56.466 "get_zone_info": false, 00:21:56.466 "zone_management": false, 00:21:56.466 "zone_append": false, 00:21:56.466 "compare": false, 00:21:56.466 "compare_and_write": false, 00:21:56.466 "abort": true, 00:21:56.466 "seek_hole": false, 00:21:56.466 "seek_data": false, 00:21:56.466 "copy": true, 00:21:56.466 "nvme_iov_md": false 00:21:56.466 }, 00:21:56.466 "memory_domains": [ 00:21:56.466 { 00:21:56.466 "dma_device_id": "system", 00:21:56.466 "dma_device_type": 1 00:21:56.466 }, 00:21:56.466 { 00:21:56.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.466 "dma_device_type": 2 00:21:56.466 } 00:21:56.466 ], 00:21:56.466 "driver_specific": { 00:21:56.466 "passthru": { 00:21:56.466 "name": "pt1", 00:21:56.466 "base_bdev_name": "malloc1" 00:21:56.466 } 00:21:56.466 } 00:21:56.466 }' 00:21:56.466 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:56.466 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:56.725 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:56.726 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:56.985 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:56.985 "name": "pt2", 00:21:56.985 "aliases": [ 00:21:56.985 "00000000-0000-0000-0000-000000000002" 00:21:56.985 ], 00:21:56.985 "product_name": "passthru", 00:21:56.985 "block_size": 4096, 00:21:56.985 "num_blocks": 8192, 00:21:56.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.985 "assigned_rate_limits": { 00:21:56.985 "rw_ios_per_sec": 0, 00:21:56.985 "rw_mbytes_per_sec": 0, 00:21:56.985 "r_mbytes_per_sec": 0, 00:21:56.985 "w_mbytes_per_sec": 0 00:21:56.985 }, 00:21:56.985 "claimed": true, 00:21:56.985 "claim_type": "exclusive_write", 00:21:56.985 "zoned": false, 00:21:56.985 "supported_io_types": { 00:21:56.985 "read": true, 00:21:56.985 "write": true, 00:21:56.985 "unmap": true, 00:21:56.985 "flush": true, 00:21:56.985 "reset": true, 00:21:56.985 "nvme_admin": false, 00:21:56.985 "nvme_io": false, 00:21:56.985 "nvme_io_md": false, 00:21:56.985 "write_zeroes": true, 00:21:56.985 "zcopy": true, 00:21:56.985 "get_zone_info": false, 00:21:56.985 "zone_management": false, 00:21:56.985 "zone_append": false, 00:21:56.985 "compare": false, 00:21:56.985 "compare_and_write": false, 00:21:56.985 "abort": true, 00:21:56.985 "seek_hole": false, 00:21:56.985 "seek_data": false, 00:21:56.985 "copy": true, 00:21:56.985 "nvme_iov_md": false 00:21:56.985 }, 00:21:56.985 "memory_domains": [ 00:21:56.985 { 00:21:56.985 "dma_device_id": "system", 
00:21:56.985 "dma_device_type": 1 00:21:56.985 }, 00:21:56.985 { 00:21:56.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.985 "dma_device_type": 2 00:21:56.985 } 00:21:56.985 ], 00:21:56.985 "driver_specific": { 00:21:56.985 "passthru": { 00:21:56.985 "name": "pt2", 00:21:56.985 "base_bdev_name": "malloc2" 00:21:56.985 } 00:21:56.985 } 00:21:56.985 }' 00:21:56.985 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:56.985 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:56.985 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:56.985 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.245 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.245 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:57.245 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:57.245 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:57.245 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:57.245 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:57.245 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:57.245 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:57.245 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:57.245 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:57.504 [2024-07-24 18:25:05.948205] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:57.504 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 5c2e95f8-b20b-446e-a21c-769bd599878b '!=' 5c2e95f8-b20b-446e-a21c-769bd599878b ']' 00:21:57.504 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:21:57.504 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:57.504 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:21:57.504 18:25:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:57.764 [2024-07-24 18:25:06.120515] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.764 "name": "raid_bdev1", 00:21:57.764 "uuid": "5c2e95f8-b20b-446e-a21c-769bd599878b", 00:21:57.764 "strip_size_kb": 0, 00:21:57.764 "state": "online", 00:21:57.764 "raid_level": "raid1", 00:21:57.764 "superblock": true, 00:21:57.764 "num_base_bdevs": 2, 00:21:57.764 "num_base_bdevs_discovered": 1, 00:21:57.764 "num_base_bdevs_operational": 1, 00:21:57.764 "base_bdevs_list": [ 00:21:57.764 { 00:21:57.764 "name": null, 00:21:57.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.764 "is_configured": false, 00:21:57.764 "data_offset": 256, 00:21:57.764 "data_size": 7936 00:21:57.764 }, 00:21:57.764 { 00:21:57.764 "name": "pt2", 00:21:57.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:57.764 "is_configured": true, 00:21:57.764 "data_offset": 256, 00:21:57.764 "data_size": 7936 00:21:57.764 } 00:21:57.764 ] 00:21:57.764 }' 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.764 18:25:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.332 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:58.332 [2024-07-24 18:25:06.922568] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:58.332 [2024-07-24 18:25:06.922590] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:58.332 [2024-07-24 18:25:06.922637] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:21:58.332 [2024-07-24 18:25:06.922667] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:58.332 [2024-07-24 18:25:06.922674] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13f5cd0 name raid_bdev1, state offline 00:21:58.592 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.592 18:25:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:21:58.592 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:21:58.592 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:21:58.592 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:21:58.592 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:58.592 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:21:58.852 [2024-07-24 18:25:07.411824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:58.852 [2024-07-24 18:25:07.411856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.852 [2024-07-24 18:25:07.411867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x125d8a0 00:21:58.852 [2024-07-24 18:25:07.411875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.852 [2024-07-24 18:25:07.413022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.852 [2024-07-24 18:25:07.413045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:58.852 [2024-07-24 18:25:07.413094] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:58.852 [2024-07-24 18:25:07.413112] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:58.852 [2024-07-24 18:25:07.413172] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1253ac0 00:21:58.852 [2024-07-24 18:25:07.413178] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:58.852 [2024-07-24 18:25:07.413295] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12551c0 00:21:58.852 [2024-07-24 18:25:07.413372] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1253ac0 00:21:58.852 [2024-07-24 18:25:07.413378] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1253ac0 00:21:58.852 [2024-07-24 18:25:07.413439] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.852 pt2 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.852 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.111 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:59.111 "name": "raid_bdev1", 00:21:59.111 "uuid": "5c2e95f8-b20b-446e-a21c-769bd599878b", 00:21:59.111 "strip_size_kb": 0, 00:21:59.112 "state": "online", 00:21:59.112 "raid_level": "raid1", 00:21:59.112 "superblock": true, 00:21:59.112 "num_base_bdevs": 2, 00:21:59.112 "num_base_bdevs_discovered": 1, 00:21:59.112 "num_base_bdevs_operational": 1, 00:21:59.112 "base_bdevs_list": [ 00:21:59.112 { 00:21:59.112 "name": null, 00:21:59.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.112 "is_configured": false, 00:21:59.112 "data_offset": 256, 00:21:59.112 "data_size": 7936 00:21:59.112 }, 00:21:59.112 { 00:21:59.112 "name": "pt2", 00:21:59.112 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:21:59.112 "is_configured": true, 00:21:59.112 "data_offset": 256, 00:21:59.112 "data_size": 7936 00:21:59.112 } 00:21:59.112 ] 00:21:59.112 }' 00:21:59.112 18:25:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:59.112 18:25:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.680 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:59.680 [2024-07-24 18:25:08.217906] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:59.680 [2024-07-24 18:25:08.217923] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.680 [2024-07-24 18:25:08.217959] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.680 [2024-07-24 18:25:08.217987] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.680 [2024-07-24 18:25:08.217995] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1253ac0 name raid_bdev1, state offline 00:21:59.680 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.680 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:21:59.939 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:21:59.939 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:21:59.939 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:21:59.939 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:00.198 [2024-07-24 18:25:08.554765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:00.198 [2024-07-24 18:25:08.554796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.198 [2024-07-24 18:25:08.554806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13ffc60 00:22:00.198 [2024-07-24 18:25:08.554814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.198 [2024-07-24 18:25:08.555932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.198 [2024-07-24 18:25:08.555954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:00.198 [2024-07-24 18:25:08.556002] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:00.198 [2024-07-24 18:25:08.556019] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:00.198 [2024-07-24 18:25:08.556085] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:00.198 [2024-07-24 18:25:08.556093] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:00.198 [2024-07-24 18:25:08.556102] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1254b30 name raid_bdev1, state configuring 00:22:00.198 [2024-07-24 18:25:08.556117] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:00.198 [2024-07-24 18:25:08.556156] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x12569f0 00:22:00.198 [2024-07-24 18:25:08.556162] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:00.198 [2024-07-24 18:25:08.556270] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1253a90 00:22:00.198 [2024-07-24 18:25:08.556351] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12569f0 00:22:00.198 [2024-07-24 18:25:08.556357] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12569f0 00:22:00.198 [2024-07-24 18:25:08.556418] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.198 pt1 00:22:00.198 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:22:00.198 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:00.198 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:00.198 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:00.198 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:00.198 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:00.199 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:00.199 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.199 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.199 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.199 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.199 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.199 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:22:00.199 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.199 "name": "raid_bdev1", 00:22:00.199 "uuid": "5c2e95f8-b20b-446e-a21c-769bd599878b", 00:22:00.199 "strip_size_kb": 0, 00:22:00.199 "state": "online", 00:22:00.199 "raid_level": "raid1", 00:22:00.199 "superblock": true, 00:22:00.199 "num_base_bdevs": 2, 00:22:00.199 "num_base_bdevs_discovered": 1, 00:22:00.199 "num_base_bdevs_operational": 1, 00:22:00.199 "base_bdevs_list": [ 00:22:00.199 { 00:22:00.199 "name": null, 00:22:00.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.199 "is_configured": false, 00:22:00.199 "data_offset": 256, 00:22:00.199 "data_size": 7936 00:22:00.199 }, 00:22:00.199 { 00:22:00.199 "name": "pt2", 00:22:00.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:00.199 "is_configured": true, 00:22:00.199 "data_offset": 256, 00:22:00.199 "data_size": 7936 00:22:00.199 } 00:22:00.199 ] 00:22:00.199 }' 00:22:00.199 18:25:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.199 18:25:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.766 18:25:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:00.766 18:25:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:22:01.025 [2024-07-24 
18:25:09.549490] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 5c2e95f8-b20b-446e-a21c-769bd599878b '!=' 5c2e95f8-b20b-446e-a21c-769bd599878b ']' 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 2293355 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 2293355 ']' 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 2293355 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2293355 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2293355' 00:22:01.025 killing process with pid 2293355 00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 2293355 00:22:01.025 [2024-07-24 18:25:09.612062] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:01.025 [2024-07-24 18:25:09.612105] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:01.025 [2024-07-24 18:25:09.612144] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:01.025 [2024-07-24 18:25:09.612153] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12569f0 name raid_bdev1, state offline 
00:22:01.025 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 2293355 00:22:01.283 [2024-07-24 18:25:09.627432] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:01.283 18:25:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:22:01.283 00:22:01.283 real 0m11.675s 00:22:01.283 user 0m21.060s 00:22:01.283 sys 0m2.296s 00:22:01.283 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.283 18:25:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.283 ************************************ 00:22:01.283 END TEST raid_superblock_test_4k 00:22:01.283 ************************************ 00:22:01.283 18:25:09 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:22:01.283 18:25:09 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:22:01.283 18:25:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:01.283 18:25:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.283 18:25:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:01.542 ************************************ 00:22:01.542 START TEST raid_rebuild_test_sb_4k 00:22:01.542 ************************************ 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:22:01.542 18:25:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=2295671 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 2295671 /var/tmp/spdk-raid.sock 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 2295671 ']' 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:01.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.542 18:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.542 [2024-07-24 18:25:09.951564] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:22:01.542 [2024-07-24 18:25:09.951611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2295671 ] 00:22:01.542 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:01.542 Zero copy mechanism will not be used. 
00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:01.0 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:01.1 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:01.2 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:01.3 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:01.4 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:01.5 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:01.6 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:01.7 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:02.0 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:02.1 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:02.2 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:02.3 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:02.4 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:02.5 cannot be used 00:22:01.542 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:02.6 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b3:02.7 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:01.0 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:01.1 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:01.2 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:01.3 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:01.4 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:01.5 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:01.6 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:01.7 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:02.0 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:02.1 cannot be used 00:22:01.542 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.542 EAL: Requested device 0000:b5:02.2 cannot be used 00:22:01.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.543 EAL: Requested device 0000:b5:02.3 cannot be used 00:22:01.543 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.543 EAL: Requested device 0000:b5:02.4 cannot be used 00:22:01.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.543 EAL: Requested device 0000:b5:02.5 cannot be used 00:22:01.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.543 EAL: Requested device 0000:b5:02.6 cannot be used 00:22:01.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:01.543 EAL: Requested device 0000:b5:02.7 cannot be used 00:22:01.543 [2024-07-24 18:25:10.045984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.543 [2024-07-24 18:25:10.126183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.801 [2024-07-24 18:25:10.179953] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.801 [2024-07-24 18:25:10.179979] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.367 18:25:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.367 18:25:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:22:02.367 18:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:02.367 18:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:22:02.367 BaseBdev1_malloc 00:22:02.367 18:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:02.626 [2024-07-24 18:25:11.048039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:02.626 [2024-07-24 18:25:11.048073] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.626 [2024-07-24 18:25:11.048088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1539370 00:22:02.626 [2024-07-24 18:25:11.048097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.626 [2024-07-24 18:25:11.049171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.626 [2024-07-24 18:25:11.049191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:02.626 BaseBdev1 00:22:02.626 18:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:02.626 18:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:22:02.884 BaseBdev2_malloc 00:22:02.884 18:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:02.884 [2024-07-24 18:25:11.388652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:02.884 [2024-07-24 18:25:11.388685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.884 [2024-07-24 18:25:11.388698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16dce70 00:22:02.884 [2024-07-24 18:25:11.388706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.884 [2024-07-24 18:25:11.389715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.884 [2024-07-24 18:25:11.389737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:02.884 BaseBdev2 00:22:02.884 18:25:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:22:03.142 spare_malloc 00:22:03.142 18:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:03.142 spare_delay 00:22:03.142 18:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:03.401 [2024-07-24 18:25:11.885538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:03.401 [2024-07-24 18:25:11.885573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.401 [2024-07-24 18:25:11.885587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16dc4b0 00:22:03.401 [2024-07-24 18:25:11.885597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.401 [2024-07-24 18:25:11.886682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.401 [2024-07-24 18:25:11.886713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:03.401 spare 00:22:03.401 18:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:03.660 [2024-07-24 18:25:12.037958] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.660 [2024-07-24 18:25:12.038793] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.660 [2024-07-24 18:25:12.038910] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1531030 00:22:03.660 [2024-07-24 18:25:12.038919] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:03.660 [2024-07-24 18:25:12.039045] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16dd100 00:22:03.660 [2024-07-24 18:25:12.039138] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1531030 00:22:03.660 [2024-07-24 18:25:12.039144] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1531030 00:22:03.660 [2024-07-24 18:25:12.039211] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.660 "name": "raid_bdev1", 00:22:03.660 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:03.660 "strip_size_kb": 0, 00:22:03.660 "state": "online", 00:22:03.660 "raid_level": "raid1", 00:22:03.660 "superblock": true, 00:22:03.660 "num_base_bdevs": 2, 00:22:03.660 "num_base_bdevs_discovered": 2, 00:22:03.660 "num_base_bdevs_operational": 2, 00:22:03.660 "base_bdevs_list": [ 00:22:03.660 { 00:22:03.660 "name": "BaseBdev1", 00:22:03.660 "uuid": "99c819d5-a489-5148-8ea0-430cb39acd3f", 00:22:03.660 "is_configured": true, 00:22:03.660 "data_offset": 256, 00:22:03.660 "data_size": 7936 00:22:03.660 }, 00:22:03.660 { 00:22:03.660 "name": "BaseBdev2", 00:22:03.660 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:03.660 "is_configured": true, 00:22:03.660 "data_offset": 256, 00:22:03.660 "data_size": 7936 00:22:03.660 } 00:22:03.660 ] 00:22:03.660 }' 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.660 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.227 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:04.227 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:04.486 [2024-07-24 18:25:12.860204] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:04.486 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:22:04.486 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.486 18:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:04.486 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:04.745 [2024-07-24 18:25:13.192943] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16dd100 00:22:04.745 
/dev/nbd0 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.745 1+0 records in 00:22:04.745 1+0 records out 00:22:04.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265527 s, 15.4 MB/s 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:04.745 
18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:22:04.745 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:05.319 7936+0 records in 00:22:05.319 7936+0 records out 00:22:05.319 32505856 bytes (33 MB, 31 MiB) copied, 0.589035 s, 55.2 MB/s 00:22:05.319 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:05.319 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:05.319 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:05.319 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:05.319 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:05.319 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.319 18:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:05.578 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:05.578 [2024-07-24 18:25:14.041827] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.578 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:05.578 
18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:05.578 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.578 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.578 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:05.578 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:05.578 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.578 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:05.838 [2024-07-24 18:25:14.202276] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.838 18:25:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.838 "name": "raid_bdev1", 00:22:05.838 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:05.838 "strip_size_kb": 0, 00:22:05.838 "state": "online", 00:22:05.838 "raid_level": "raid1", 00:22:05.838 "superblock": true, 00:22:05.838 "num_base_bdevs": 2, 00:22:05.838 "num_base_bdevs_discovered": 1, 00:22:05.838 "num_base_bdevs_operational": 1, 00:22:05.838 "base_bdevs_list": [ 00:22:05.838 { 00:22:05.838 "name": null, 00:22:05.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.838 "is_configured": false, 00:22:05.838 "data_offset": 256, 00:22:05.838 "data_size": 7936 00:22:05.838 }, 00:22:05.838 { 00:22:05.838 "name": "BaseBdev2", 00:22:05.838 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:05.838 "is_configured": true, 00:22:05.838 "data_offset": 256, 00:22:05.838 "data_size": 7936 00:22:05.838 } 00:22:05.838 ] 00:22:05.838 }' 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.838 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.406 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:06.406 [2024-07-24 18:25:14.984288] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:06.406 [2024-07-24 18:25:14.988600] bdev_raid.c: 263:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x16dd100 00:22:06.406 [2024-07-24 18:25:14.990137] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:06.406 18:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:07.798 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:07.799 "name": "raid_bdev1", 00:22:07.799 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:07.799 "strip_size_kb": 0, 00:22:07.799 "state": "online", 00:22:07.799 "raid_level": "raid1", 00:22:07.799 "superblock": true, 00:22:07.799 "num_base_bdevs": 2, 00:22:07.799 "num_base_bdevs_discovered": 2, 00:22:07.799 "num_base_bdevs_operational": 2, 00:22:07.799 "process": { 00:22:07.799 "type": "rebuild", 00:22:07.799 "target": "spare", 00:22:07.799 "progress": { 00:22:07.799 "blocks": 2816, 00:22:07.799 "percent": 35 00:22:07.799 } 00:22:07.799 }, 00:22:07.799 "base_bdevs_list": [ 00:22:07.799 { 00:22:07.799 "name": "spare", 00:22:07.799 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:07.799 
"is_configured": true, 00:22:07.799 "data_offset": 256, 00:22:07.799 "data_size": 7936 00:22:07.799 }, 00:22:07.799 { 00:22:07.799 "name": "BaseBdev2", 00:22:07.799 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:07.799 "is_configured": true, 00:22:07.799 "data_offset": 256, 00:22:07.799 "data_size": 7936 00:22:07.799 } 00:22:07.799 ] 00:22:07.799 }' 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:07.799 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:08.072 [2024-07-24 18:25:16.413075] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.072 [2024-07-24 18:25:16.500540] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:08.072 [2024-07-24 18:25:16.500573] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.072 [2024-07-24 18:25:16.500583] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.072 [2024-07-24 18:25:16.500589] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.072 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.331 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.331 "name": "raid_bdev1", 00:22:08.331 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:08.331 "strip_size_kb": 0, 00:22:08.331 "state": "online", 00:22:08.331 "raid_level": "raid1", 00:22:08.331 "superblock": true, 00:22:08.331 "num_base_bdevs": 2, 00:22:08.331 "num_base_bdevs_discovered": 1, 00:22:08.331 "num_base_bdevs_operational": 1, 00:22:08.331 "base_bdevs_list": [ 00:22:08.331 { 00:22:08.331 "name": null, 00:22:08.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.331 "is_configured": false, 00:22:08.331 "data_offset": 256, 00:22:08.331 "data_size": 7936 00:22:08.331 }, 00:22:08.331 { 00:22:08.331 "name": "BaseBdev2", 00:22:08.331 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:08.331 
"is_configured": true, 00:22:08.331 "data_offset": 256, 00:22:08.331 "data_size": 7936 00:22:08.331 } 00:22:08.331 ] 00:22:08.331 }' 00:22:08.331 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.331 18:25:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:08.590 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:08.590 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:08.590 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:08.590 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:08.590 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:08.590 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.590 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.850 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:08.850 "name": "raid_bdev1", 00:22:08.850 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:08.850 "strip_size_kb": 0, 00:22:08.850 "state": "online", 00:22:08.850 "raid_level": "raid1", 00:22:08.850 "superblock": true, 00:22:08.850 "num_base_bdevs": 2, 00:22:08.850 "num_base_bdevs_discovered": 1, 00:22:08.850 "num_base_bdevs_operational": 1, 00:22:08.850 "base_bdevs_list": [ 00:22:08.850 { 00:22:08.850 "name": null, 00:22:08.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.850 "is_configured": false, 00:22:08.850 "data_offset": 256, 00:22:08.850 "data_size": 7936 00:22:08.850 }, 00:22:08.850 { 00:22:08.850 "name": "BaseBdev2", 
00:22:08.850 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:08.850 "is_configured": true, 00:22:08.850 "data_offset": 256, 00:22:08.850 "data_size": 7936 00:22:08.850 } 00:22:08.850 ] 00:22:08.850 }' 00:22:08.850 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:08.850 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:08.850 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:08.850 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:08.850 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:09.109 [2024-07-24 18:25:17.563422] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.109 [2024-07-24 18:25:17.567826] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16d1630 00:22:09.109 [2024-07-24 18:25:17.568875] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:09.109 18:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:10.047 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.047 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:10.047 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:10.047 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:10.047 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:10.047 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.047 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:10.307 "name": "raid_bdev1", 00:22:10.307 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:10.307 "strip_size_kb": 0, 00:22:10.307 "state": "online", 00:22:10.307 "raid_level": "raid1", 00:22:10.307 "superblock": true, 00:22:10.307 "num_base_bdevs": 2, 00:22:10.307 "num_base_bdevs_discovered": 2, 00:22:10.307 "num_base_bdevs_operational": 2, 00:22:10.307 "process": { 00:22:10.307 "type": "rebuild", 00:22:10.307 "target": "spare", 00:22:10.307 "progress": { 00:22:10.307 "blocks": 2816, 00:22:10.307 "percent": 35 00:22:10.307 } 00:22:10.307 }, 00:22:10.307 "base_bdevs_list": [ 00:22:10.307 { 00:22:10.307 "name": "spare", 00:22:10.307 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:10.307 "is_configured": true, 00:22:10.307 "data_offset": 256, 00:22:10.307 "data_size": 7936 00:22:10.307 }, 00:22:10.307 { 00:22:10.307 "name": "BaseBdev2", 00:22:10.307 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:10.307 "is_configured": true, 00:22:10.307 "data_offset": 256, 00:22:10.307 "data_size": 7936 00:22:10.307 } 00:22:10.307 ] 00:22:10.307 }' 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # 
'[' true = true ']' 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:22:10.307 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=782 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.307 18:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.566 18:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:10.566 "name": "raid_bdev1", 00:22:10.566 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:10.566 "strip_size_kb": 0, 00:22:10.566 "state": "online", 00:22:10.566 "raid_level": 
"raid1", 00:22:10.566 "superblock": true, 00:22:10.566 "num_base_bdevs": 2, 00:22:10.566 "num_base_bdevs_discovered": 2, 00:22:10.566 "num_base_bdevs_operational": 2, 00:22:10.566 "process": { 00:22:10.566 "type": "rebuild", 00:22:10.566 "target": "spare", 00:22:10.566 "progress": { 00:22:10.566 "blocks": 3584, 00:22:10.566 "percent": 45 00:22:10.566 } 00:22:10.566 }, 00:22:10.566 "base_bdevs_list": [ 00:22:10.566 { 00:22:10.566 "name": "spare", 00:22:10.566 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:10.566 "is_configured": true, 00:22:10.566 "data_offset": 256, 00:22:10.566 "data_size": 7936 00:22:10.566 }, 00:22:10.566 { 00:22:10.566 "name": "BaseBdev2", 00:22:10.566 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:10.566 "is_configured": true, 00:22:10.566 "data_offset": 256, 00:22:10.566 "data_size": 7936 00:22:10.566 } 00:22:10.566 ] 00:22:10.566 }' 00:22:10.566 18:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:10.566 18:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.566 18:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:10.566 18:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.566 18:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:11.504 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:11.504 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.504 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:11.504 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:11.504 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:22:11.504 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:11.504 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.504 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.763 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:11.763 "name": "raid_bdev1", 00:22:11.763 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:11.763 "strip_size_kb": 0, 00:22:11.763 "state": "online", 00:22:11.763 "raid_level": "raid1", 00:22:11.763 "superblock": true, 00:22:11.763 "num_base_bdevs": 2, 00:22:11.763 "num_base_bdevs_discovered": 2, 00:22:11.763 "num_base_bdevs_operational": 2, 00:22:11.763 "process": { 00:22:11.763 "type": "rebuild", 00:22:11.763 "target": "spare", 00:22:11.763 "progress": { 00:22:11.763 "blocks": 6656, 00:22:11.763 "percent": 83 00:22:11.763 } 00:22:11.763 }, 00:22:11.763 "base_bdevs_list": [ 00:22:11.763 { 00:22:11.763 "name": "spare", 00:22:11.763 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:11.763 "is_configured": true, 00:22:11.763 "data_offset": 256, 00:22:11.763 "data_size": 7936 00:22:11.763 }, 00:22:11.763 { 00:22:11.763 "name": "BaseBdev2", 00:22:11.763 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:11.763 "is_configured": true, 00:22:11.763 "data_offset": 256, 00:22:11.763 "data_size": 7936 00:22:11.763 } 00:22:11.763 ] 00:22:11.763 }' 00:22:11.763 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:11.763 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.763 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:22:11.763 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.763 18:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:12.331 [2024-07-24 18:25:20.690226] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:12.331 [2024-07-24 18:25:20.690264] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:12.331 [2024-07-24 18:25:20.690337] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.899 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:12.899 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.899 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:12.899 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:12.899 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:12.899 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:12.899 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.899 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:13.158 "name": "raid_bdev1", 00:22:13.158 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:13.158 "strip_size_kb": 0, 00:22:13.158 "state": "online", 00:22:13.158 "raid_level": "raid1", 00:22:13.158 "superblock": true, 00:22:13.158 "num_base_bdevs": 
2, 00:22:13.158 "num_base_bdevs_discovered": 2, 00:22:13.158 "num_base_bdevs_operational": 2, 00:22:13.158 "base_bdevs_list": [ 00:22:13.158 { 00:22:13.158 "name": "spare", 00:22:13.158 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:13.158 "is_configured": true, 00:22:13.158 "data_offset": 256, 00:22:13.158 "data_size": 7936 00:22:13.158 }, 00:22:13.158 { 00:22:13.158 "name": "BaseBdev2", 00:22:13.158 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:13.158 "is_configured": true, 00:22:13.158 "data_offset": 256, 00:22:13.158 "data_size": 7936 00:22:13.158 } 00:22:13.158 ] 00:22:13.158 }' 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.158 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:13.417 "name": "raid_bdev1", 00:22:13.417 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:13.417 "strip_size_kb": 0, 00:22:13.417 "state": "online", 00:22:13.417 "raid_level": "raid1", 00:22:13.417 "superblock": true, 00:22:13.417 "num_base_bdevs": 2, 00:22:13.417 "num_base_bdevs_discovered": 2, 00:22:13.417 "num_base_bdevs_operational": 2, 00:22:13.417 "base_bdevs_list": [ 00:22:13.417 { 00:22:13.417 "name": "spare", 00:22:13.417 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:13.417 "is_configured": true, 00:22:13.417 "data_offset": 256, 00:22:13.417 "data_size": 7936 00:22:13.417 }, 00:22:13.417 { 00:22:13.417 "name": "BaseBdev2", 00:22:13.417 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:13.417 "is_configured": true, 00:22:13.417 "data_offset": 256, 00:22:13.417 "data_size": 7936 00:22:13.417 } 00:22:13.417 ] 00:22:13.417 }' 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.417 18:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.676 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.676 "name": "raid_bdev1", 00:22:13.676 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:13.676 "strip_size_kb": 0, 00:22:13.676 "state": "online", 00:22:13.676 "raid_level": "raid1", 00:22:13.676 "superblock": true, 00:22:13.676 "num_base_bdevs": 2, 00:22:13.676 "num_base_bdevs_discovered": 2, 00:22:13.676 "num_base_bdevs_operational": 2, 00:22:13.676 "base_bdevs_list": [ 00:22:13.676 { 00:22:13.676 "name": "spare", 00:22:13.676 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:13.676 "is_configured": true, 00:22:13.676 "data_offset": 256, 00:22:13.676 "data_size": 7936 00:22:13.676 }, 00:22:13.676 { 00:22:13.676 "name": "BaseBdev2", 00:22:13.676 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:13.676 "is_configured": true, 00:22:13.676 "data_offset": 256, 00:22:13.676 "data_size": 7936 00:22:13.676 } 00:22:13.676 ] 00:22:13.676 }' 00:22:13.676 18:25:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.676 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:14.244 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:14.244 [2024-07-24 18:25:22.687205] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:14.244 [2024-07-24 18:25:22.687225] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.244 [2024-07-24 18:25:22.687270] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.244 [2024-07-24 18:25:22.687310] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.244 [2024-07-24 18:25:22.687318] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1531030 name raid_bdev1, state offline 00:22:14.244 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.244 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:14.503 18:25:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:14.503 18:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:14.503 /dev/nbd0 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:14.503 1+0 records in 00:22:14.503 1+0 records out 00:22:14.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152876 s, 26.8 MB/s 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:14.503 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:14.763 /dev/nbd1 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 
00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:14.763 1+0 records in 00:22:14.763 1+0 records out 00:22:14.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265679 s, 15.4 MB/s 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 
/dev/nbd1' 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:14.763 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:15.022 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:15.022 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:15.022 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:15.022 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.022 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.022 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:15.022 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:15.022 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.022 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:15.022 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:15.280 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd1 00:22:15.280 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:15.280 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:15.280 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:15.280 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:15.280 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:15.280 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:15.280 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:15.280 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:22:15.280 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:15.539 18:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:15.539 [2024-07-24 18:25:24.071638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:15.539 [2024-07-24 18:25:24.071673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.539 [2024-07-24 18:25:24.071688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15312b0 00:22:15.539 [2024-07-24 18:25:24.071696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.539 [2024-07-24 18:25:24.072857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.539 [2024-07-24 18:25:24.072879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:15.539 
[2024-07-24 18:25:24.072931] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:15.539 [2024-07-24 18:25:24.072949] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:15.539 [2024-07-24 18:25:24.073016] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:15.539 spare 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.539 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.797 [2024-07-24 18:25:24.173308] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x152fca0 00:22:15.797 [2024-07-24 
18:25:24.173320] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:15.797 [2024-07-24 18:25:24.173441] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15304d0 00:22:15.797 [2024-07-24 18:25:24.173539] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x152fca0 00:22:15.797 [2024-07-24 18:25:24.173546] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x152fca0 00:22:15.797 [2024-07-24 18:25:24.173612] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.797 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:15.797 "name": "raid_bdev1", 00:22:15.797 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:15.797 "strip_size_kb": 0, 00:22:15.797 "state": "online", 00:22:15.797 "raid_level": "raid1", 00:22:15.797 "superblock": true, 00:22:15.797 "num_base_bdevs": 2, 00:22:15.797 "num_base_bdevs_discovered": 2, 00:22:15.797 "num_base_bdevs_operational": 2, 00:22:15.797 "base_bdevs_list": [ 00:22:15.797 { 00:22:15.797 "name": "spare", 00:22:15.797 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:15.797 "is_configured": true, 00:22:15.797 "data_offset": 256, 00:22:15.797 "data_size": 7936 00:22:15.797 }, 00:22:15.797 { 00:22:15.797 "name": "BaseBdev2", 00:22:15.797 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:15.797 "is_configured": true, 00:22:15.797 "data_offset": 256, 00:22:15.797 "data_size": 7936 00:22:15.797 } 00:22:15.797 ] 00:22:15.797 }' 00:22:15.797 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:15.797 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:16.364 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:16.364 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_name=raid_bdev1 00:22:16.364 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:16.364 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:16.364 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:16.364 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.364 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.364 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:16.364 "name": "raid_bdev1", 00:22:16.364 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:16.364 "strip_size_kb": 0, 00:22:16.364 "state": "online", 00:22:16.364 "raid_level": "raid1", 00:22:16.364 "superblock": true, 00:22:16.364 "num_base_bdevs": 2, 00:22:16.364 "num_base_bdevs_discovered": 2, 00:22:16.364 "num_base_bdevs_operational": 2, 00:22:16.364 "base_bdevs_list": [ 00:22:16.364 { 00:22:16.364 "name": "spare", 00:22:16.364 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:16.364 "is_configured": true, 00:22:16.364 "data_offset": 256, 00:22:16.364 "data_size": 7936 00:22:16.364 }, 00:22:16.364 { 00:22:16.364 "name": "BaseBdev2", 00:22:16.364 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:16.364 "is_configured": true, 00:22:16.364 "data_offset": 256, 00:22:16.364 "data_size": 7936 00:22:16.364 } 00:22:16.364 ] 00:22:16.364 }' 00:22:16.364 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:16.622 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:16.622 18:25:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:22:16.622 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:16.622 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.622 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:16.622 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:22:16.622 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:16.881 [2024-07-24 18:25:25.338967] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 
00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.881 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.140 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.140 "name": "raid_bdev1", 00:22:17.140 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:17.140 "strip_size_kb": 0, 00:22:17.140 "state": "online", 00:22:17.140 "raid_level": "raid1", 00:22:17.140 "superblock": true, 00:22:17.140 "num_base_bdevs": 2, 00:22:17.140 "num_base_bdevs_discovered": 1, 00:22:17.140 "num_base_bdevs_operational": 1, 00:22:17.140 "base_bdevs_list": [ 00:22:17.140 { 00:22:17.140 "name": null, 00:22:17.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.140 "is_configured": false, 00:22:17.140 "data_offset": 256, 00:22:17.140 "data_size": 7936 00:22:17.140 }, 00:22:17.140 { 00:22:17.140 "name": "BaseBdev2", 00:22:17.140 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:17.140 "is_configured": true, 00:22:17.140 "data_offset": 256, 00:22:17.140 "data_size": 7936 00:22:17.140 } 00:22:17.140 ] 00:22:17.140 }' 00:22:17.140 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.140 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.707 18:25:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:17.707 [2024-07-24 18:25:26.141042] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:17.707 [2024-07-24 18:25:26.141150] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid 
bdev raid_bdev1 (5) 00:22:17.707 [2024-07-24 18:25:26.141161] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:17.707 [2024-07-24 18:25:26.141180] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:17.707 [2024-07-24 18:25:26.145437] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15338b0 00:22:17.707 [2024-07-24 18:25:26.146997] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:17.707 18:25:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:18.643 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.643 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:18.643 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:18.643 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:18.643 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:18.643 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.643 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.902 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:18.902 "name": "raid_bdev1", 00:22:18.902 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:18.902 "strip_size_kb": 0, 00:22:18.902 "state": "online", 00:22:18.902 "raid_level": "raid1", 00:22:18.902 "superblock": true, 00:22:18.902 "num_base_bdevs": 2, 00:22:18.902 "num_base_bdevs_discovered": 2, 00:22:18.902 "num_base_bdevs_operational": 2, 
00:22:18.902 "process": { 00:22:18.902 "type": "rebuild", 00:22:18.902 "target": "spare", 00:22:18.902 "progress": { 00:22:18.902 "blocks": 2816, 00:22:18.902 "percent": 35 00:22:18.902 } 00:22:18.902 }, 00:22:18.902 "base_bdevs_list": [ 00:22:18.902 { 00:22:18.902 "name": "spare", 00:22:18.902 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:18.902 "is_configured": true, 00:22:18.902 "data_offset": 256, 00:22:18.902 "data_size": 7936 00:22:18.902 }, 00:22:18.902 { 00:22:18.902 "name": "BaseBdev2", 00:22:18.902 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:18.902 "is_configured": true, 00:22:18.902 "data_offset": 256, 00:22:18.902 "data_size": 7936 00:22:18.902 } 00:22:18.902 ] 00:22:18.902 }' 00:22:18.902 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:18.902 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:18.902 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:18.902 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:18.902 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:19.161 [2024-07-24 18:25:27.549543] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:19.161 [2024-07-24 18:25:27.556733] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:19.161 [2024-07-24 18:25:27.556762] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.161 [2024-07-24 18:25:27.556772] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:19.161 [2024-07-24 18:25:27.556793] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target 
bdev: No such device 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.161 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.420 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.420 "name": "raid_bdev1", 00:22:19.420 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:19.420 "strip_size_kb": 0, 00:22:19.420 "state": "online", 00:22:19.420 "raid_level": "raid1", 00:22:19.420 "superblock": true, 00:22:19.420 "num_base_bdevs": 2, 00:22:19.420 "num_base_bdevs_discovered": 1, 00:22:19.420 "num_base_bdevs_operational": 1, 00:22:19.420 "base_bdevs_list": [ 00:22:19.420 { 
00:22:19.420 "name": null, 00:22:19.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.420 "is_configured": false, 00:22:19.420 "data_offset": 256, 00:22:19.420 "data_size": 7936 00:22:19.420 }, 00:22:19.420 { 00:22:19.420 "name": "BaseBdev2", 00:22:19.420 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:19.420 "is_configured": true, 00:22:19.420 "data_offset": 256, 00:22:19.420 "data_size": 7936 00:22:19.420 } 00:22:19.420 ] 00:22:19.420 }' 00:22:19.420 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.420 18:25:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.679 18:25:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:19.938 [2024-07-24 18:25:28.409957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:19.938 [2024-07-24 18:25:28.409995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.938 [2024-07-24 18:25:28.410026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15321a0 00:22:19.938 [2024-07-24 18:25:28.410034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.938 [2024-07-24 18:25:28.410309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.938 [2024-07-24 18:25:28.410321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:19.938 [2024-07-24 18:25:28.410383] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:19.938 [2024-07-24 18:25:28.410390] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:19.938 [2024-07-24 18:25:28.410397] bdev_raid.c:3712:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:19.938 [2024-07-24 18:25:28.410410] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:19.938 [2024-07-24 18:25:28.414706] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15338b0 00:22:19.938 spare 00:22:19.938 [2024-07-24 18:25:28.415809] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:19.938 18:25:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:20.874 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:20.874 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:20.874 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:20.874 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:20.874 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:20.874 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.874 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.133 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:21.133 "name": "raid_bdev1", 00:22:21.133 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:21.133 "strip_size_kb": 0, 00:22:21.133 "state": "online", 00:22:21.133 "raid_level": "raid1", 00:22:21.133 "superblock": true, 00:22:21.133 "num_base_bdevs": 2, 00:22:21.133 "num_base_bdevs_discovered": 2, 00:22:21.133 "num_base_bdevs_operational": 2, 00:22:21.133 "process": { 00:22:21.133 "type": "rebuild", 00:22:21.133 "target": 
"spare", 00:22:21.133 "progress": { 00:22:21.133 "blocks": 2816, 00:22:21.133 "percent": 35 00:22:21.133 } 00:22:21.133 }, 00:22:21.133 "base_bdevs_list": [ 00:22:21.133 { 00:22:21.133 "name": "spare", 00:22:21.133 "uuid": "fb7ed51b-a109-501e-a6f2-f7ef81e510dd", 00:22:21.133 "is_configured": true, 00:22:21.133 "data_offset": 256, 00:22:21.133 "data_size": 7936 00:22:21.133 }, 00:22:21.133 { 00:22:21.133 "name": "BaseBdev2", 00:22:21.133 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:21.133 "is_configured": true, 00:22:21.133 "data_offset": 256, 00:22:21.133 "data_size": 7936 00:22:21.133 } 00:22:21.133 ] 00:22:21.133 }' 00:22:21.133 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:21.133 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:21.133 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:21.133 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:21.133 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:21.392 [2024-07-24 18:25:29.850348] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:21.392 [2024-07-24 18:25:29.926154] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:21.392 [2024-07-24 18:25:29.926183] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.392 [2024-07-24 18:25:29.926192] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:21.392 [2024-07-24 18:25:29.926213] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.392 18:25:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.651 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:21.651 "name": "raid_bdev1", 00:22:21.651 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:21.651 "strip_size_kb": 0, 00:22:21.651 "state": "online", 00:22:21.651 "raid_level": "raid1", 00:22:21.651 "superblock": true, 00:22:21.651 "num_base_bdevs": 2, 00:22:21.651 "num_base_bdevs_discovered": 1, 00:22:21.651 "num_base_bdevs_operational": 1, 00:22:21.651 "base_bdevs_list": [ 00:22:21.651 { 00:22:21.651 "name": null, 00:22:21.651 "uuid": "00000000-0000-0000-0000-000000000000", 
00:22:21.651 "is_configured": false, 00:22:21.651 "data_offset": 256, 00:22:21.651 "data_size": 7936 00:22:21.651 }, 00:22:21.651 { 00:22:21.651 "name": "BaseBdev2", 00:22:21.651 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:21.651 "is_configured": true, 00:22:21.651 "data_offset": 256, 00:22:21.651 "data_size": 7936 00:22:21.651 } 00:22:21.651 ] 00:22:21.651 }' 00:22:21.651 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:21.651 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:22.218 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:22.218 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:22.218 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:22.218 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:22.218 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:22.218 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.218 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.218 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:22.218 "name": "raid_bdev1", 00:22:22.218 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:22.218 "strip_size_kb": 0, 00:22:22.218 "state": "online", 00:22:22.218 "raid_level": "raid1", 00:22:22.218 "superblock": true, 00:22:22.218 "num_base_bdevs": 2, 00:22:22.218 "num_base_bdevs_discovered": 1, 00:22:22.218 "num_base_bdevs_operational": 1, 00:22:22.218 "base_bdevs_list": [ 00:22:22.218 { 00:22:22.218 
"name": null, 00:22:22.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.218 "is_configured": false, 00:22:22.218 "data_offset": 256, 00:22:22.218 "data_size": 7936 00:22:22.218 }, 00:22:22.218 { 00:22:22.218 "name": "BaseBdev2", 00:22:22.218 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:22.218 "is_configured": true, 00:22:22.218 "data_offset": 256, 00:22:22.218 "data_size": 7936 00:22:22.218 } 00:22:22.218 ] 00:22:22.218 }' 00:22:22.218 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:22.515 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:22.515 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:22.515 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:22.515 18:25:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:22.515 18:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:22.774 [2024-07-24 18:25:31.189468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:22.774 [2024-07-24 18:25:31.189504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.774 [2024-07-24 18:25:31.189520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15d0460 00:22:22.774 [2024-07-24 18:25:31.189529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.774 [2024-07-24 18:25:31.189787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.774 [2024-07-24 18:25:31.189799] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:22.774 [2024-07-24 18:25:31.189845] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:22.774 [2024-07-24 18:25:31.189853] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:22.774 [2024-07-24 18:25:31.189861] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:22.774 BaseBdev1 00:22:22.774 18:25:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:22:23.709 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.967 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.967 "name": "raid_bdev1", 00:22:23.967 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:23.967 "strip_size_kb": 0, 00:22:23.967 "state": "online", 00:22:23.967 "raid_level": "raid1", 00:22:23.967 "superblock": true, 00:22:23.967 "num_base_bdevs": 2, 00:22:23.967 "num_base_bdevs_discovered": 1, 00:22:23.967 "num_base_bdevs_operational": 1, 00:22:23.967 "base_bdevs_list": [ 00:22:23.967 { 00:22:23.967 "name": null, 00:22:23.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.967 "is_configured": false, 00:22:23.967 "data_offset": 256, 00:22:23.967 "data_size": 7936 00:22:23.967 }, 00:22:23.967 { 00:22:23.967 "name": "BaseBdev2", 00:22:23.967 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:23.967 "is_configured": true, 00:22:23.967 "data_offset": 256, 00:22:23.967 "data_size": 7936 00:22:23.967 } 00:22:23.967 ] 00:22:23.967 }' 00:22:23.967 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.967 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.534 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:24.534 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:24.534 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:24.534 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:24.534 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:24.534 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.534 18:25:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.534 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:24.534 "name": "raid_bdev1", 00:22:24.534 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:24.534 "strip_size_kb": 0, 00:22:24.534 "state": "online", 00:22:24.534 "raid_level": "raid1", 00:22:24.534 "superblock": true, 00:22:24.534 "num_base_bdevs": 2, 00:22:24.534 "num_base_bdevs_discovered": 1, 00:22:24.534 "num_base_bdevs_operational": 1, 00:22:24.534 "base_bdevs_list": [ 00:22:24.534 { 00:22:24.535 "name": null, 00:22:24.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.535 "is_configured": false, 00:22:24.535 "data_offset": 256, 00:22:24.535 "data_size": 7936 00:22:24.535 }, 00:22:24.535 { 00:22:24.535 "name": "BaseBdev2", 00:22:24.535 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:24.535 "is_configured": true, 00:22:24.535 "data_offset": 256, 00:22:24.535 "data_size": 7936 00:22:24.535 } 00:22:24.535 ] 00:22:24.535 }' 00:22:24.535 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 
00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:24.794 [2024-07-24 18:25:33.339042] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:24.794 [2024-07-24 18:25:33.339136] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:24.794 [2024-07-24 
18:25:33.339145] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:24.794 request: 00:22:24.794 { 00:22:24.794 "base_bdev": "BaseBdev1", 00:22:24.794 "raid_bdev": "raid_bdev1", 00:22:24.794 "method": "bdev_raid_add_base_bdev", 00:22:24.794 "req_id": 1 00:22:24.794 } 00:22:24.794 Got JSON-RPC error response 00:22:24.794 response: 00:22:24.794 { 00:22:24.794 "code": -22, 00:22:24.794 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:24.794 } 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.794 18:25:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.169 18:25:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.169 "name": "raid_bdev1", 00:22:26.169 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:26.169 "strip_size_kb": 0, 00:22:26.169 "state": "online", 00:22:26.169 "raid_level": "raid1", 00:22:26.169 "superblock": true, 00:22:26.169 "num_base_bdevs": 2, 00:22:26.169 "num_base_bdevs_discovered": 1, 00:22:26.169 "num_base_bdevs_operational": 1, 00:22:26.169 "base_bdevs_list": [ 00:22:26.169 { 00:22:26.169 "name": null, 00:22:26.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.169 "is_configured": false, 00:22:26.169 "data_offset": 256, 00:22:26.169 "data_size": 7936 00:22:26.169 }, 00:22:26.169 { 00:22:26.169 "name": "BaseBdev2", 00:22:26.169 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:26.169 "is_configured": true, 00:22:26.169 "data_offset": 256, 00:22:26.169 "data_size": 7936 00:22:26.169 } 00:22:26.169 ] 00:22:26.169 }' 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.169 18:25:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.428 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:26.428 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:26.428 18:25:35 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:26.428 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:26.428 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:26.428 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.428 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.687 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:26.687 "name": "raid_bdev1", 00:22:26.687 "uuid": "f09a60f8-a26e-4271-83db-15793f257f2d", 00:22:26.687 "strip_size_kb": 0, 00:22:26.687 "state": "online", 00:22:26.687 "raid_level": "raid1", 00:22:26.687 "superblock": true, 00:22:26.687 "num_base_bdevs": 2, 00:22:26.687 "num_base_bdevs_discovered": 1, 00:22:26.687 "num_base_bdevs_operational": 1, 00:22:26.687 "base_bdevs_list": [ 00:22:26.687 { 00:22:26.687 "name": null, 00:22:26.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.687 "is_configured": false, 00:22:26.687 "data_offset": 256, 00:22:26.687 "data_size": 7936 00:22:26.687 }, 00:22:26.687 { 00:22:26.687 "name": "BaseBdev2", 00:22:26.687 "uuid": "04dce6dc-7d6c-518d-8f3f-0e632c8bdcbd", 00:22:26.687 "is_configured": true, 00:22:26.687 "data_offset": 256, 00:22:26.687 "data_size": 7936 00:22:26.687 } 00:22:26.687 ] 00:22:26.687 }' 00:22:26.687 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:26.687 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:26.688 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:26.688 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ 
none == \n\o\n\e ]] 00:22:26.688 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 2295671 00:22:26.688 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 2295671 ']' 00:22:26.688 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 2295671 00:22:26.688 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:22:26.946 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.946 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2295671 00:22:26.946 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.946 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.946 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2295671' 00:22:26.946 killing process with pid 2295671 00:22:26.946 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 2295671 00:22:26.946 Received shutdown signal, test time was about 60.000000 seconds 00:22:26.946 00:22:26.946 Latency(us) 00:22:26.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.946 =================================================================================================================== 00:22:26.946 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.946 [2024-07-24 18:25:35.333587] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:26.946 [2024-07-24 18:25:35.333660] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.946 [2024-07-24 18:25:35.333690] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:22:26.946 [2024-07-24 18:25:35.333698] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x152fca0 name raid_bdev1, state offline 00:22:26.946 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 2295671 00:22:26.946 [2024-07-24 18:25:35.357416] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:26.946 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:22:26.946 00:22:26.946 real 0m25.639s 00:22:26.946 user 0m38.476s 00:22:26.946 sys 0m4.137s 00:22:26.946 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:26.946 18:25:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.946 ************************************ 00:22:26.946 END TEST raid_rebuild_test_sb_4k 00:22:26.947 ************************************ 00:22:27.206 18:25:35 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:22:27.206 18:25:35 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:27.206 18:25:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:27.206 18:25:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:27.206 18:25:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:27.206 ************************************ 00:22:27.206 START TEST raid_state_function_test_sb_md_separate 00:22:27.206 ************************************ 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:22:27.206 18:25:35 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:27.206 18:25:35 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=2300422 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2300422' 00:22:27.206 Process raid pid: 2300422 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 2300422 /var/tmp/spdk-raid.sock 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 2300422 ']' 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:27.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.206 18:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.206 [2024-07-24 18:25:35.661599] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:22:27.206 [2024-07-24 18:25:35.661651] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:01.0 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:01.1 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:01.2 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:01.3 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:01.4 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:01.5 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:01.6 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:01.7 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:02.0 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 
EAL: Requested device 0000:b3:02.1 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:02.2 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:02.3 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:02.4 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:02.5 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:02.6 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b3:02.7 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b5:01.0 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b5:01.1 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b5:01.2 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b5:01.3 cannot be used 00:22:27.206 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.206 EAL: Requested device 0000:b5:01.4 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 0000:b5:01.5 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 0000:b5:01.6 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 
0000:b5:01.7 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 0000:b5:02.0 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 0000:b5:02.1 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 0000:b5:02.2 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 0000:b5:02.3 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 0000:b5:02.4 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 0000:b5:02.5 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 0000:b5:02.6 cannot be used 00:22:27.207 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:27.207 EAL: Requested device 0000:b5:02.7 cannot be used 00:22:27.207 [2024-07-24 18:25:35.754422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.466 [2024-07-24 18:25:35.828478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.466 [2024-07-24 18:25:35.880681] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.466 [2024-07-24 18:25:35.880706] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:28.034 [2024-07-24 18:25:36.607643] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:28.034 [2024-07-24 18:25:36.607670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:28.034 [2024-07-24 18:25:36.607677] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:28.034 [2024-07-24 18:25:36.607684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 
00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.034 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.292 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.292 "name": "Existed_Raid", 00:22:28.292 "uuid": "c0d62059-1261-4d3d-b99a-f329cd498043", 00:22:28.292 "strip_size_kb": 0, 00:22:28.292 "state": "configuring", 00:22:28.292 "raid_level": "raid1", 00:22:28.292 "superblock": true, 00:22:28.292 "num_base_bdevs": 2, 00:22:28.292 "num_base_bdevs_discovered": 0, 00:22:28.292 "num_base_bdevs_operational": 2, 00:22:28.292 "base_bdevs_list": [ 00:22:28.292 { 00:22:28.292 "name": "BaseBdev1", 00:22:28.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.292 "is_configured": false, 00:22:28.292 "data_offset": 0, 00:22:28.292 "data_size": 0 00:22:28.292 }, 00:22:28.292 { 00:22:28.292 "name": "BaseBdev2", 00:22:28.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.292 "is_configured": false, 00:22:28.292 "data_offset": 0, 00:22:28.292 "data_size": 0 00:22:28.292 } 00:22:28.292 ] 00:22:28.292 }' 00:22:28.292 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.292 18:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.860 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:28.860 [2024-07-24 18:25:37.433697] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:28.860 [2024-07-24 18:25:37.433719] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x128f1a0 name Existed_Raid, state configuring 00:22:28.860 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:29.118 [2024-07-24 18:25:37.614167] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:29.118 [2024-07-24 18:25:37.614190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:29.118 [2024-07-24 18:25:37.614196] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:29.118 [2024-07-24 18:25:37.614206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:29.118 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:22:29.377 [2024-07-24 18:25:37.795749] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:29.377 BaseBdev1 00:22:29.377 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:29.377 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:29.377 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:29.377 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:22:29.377 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:29.377 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:29.377 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:29.636 18:25:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:29.636 [ 00:22:29.636 { 00:22:29.636 "name": "BaseBdev1", 00:22:29.636 "aliases": [ 00:22:29.636 "dcfd1c51-6e83-420a-ae5a-ed00e1103034" 00:22:29.636 ], 00:22:29.636 "product_name": "Malloc disk", 00:22:29.636 "block_size": 4096, 00:22:29.636 "num_blocks": 8192, 00:22:29.636 "uuid": "dcfd1c51-6e83-420a-ae5a-ed00e1103034", 00:22:29.636 "md_size": 32, 00:22:29.636 "md_interleave": false, 00:22:29.636 "dif_type": 0, 00:22:29.636 "assigned_rate_limits": { 00:22:29.636 "rw_ios_per_sec": 0, 00:22:29.636 "rw_mbytes_per_sec": 0, 00:22:29.636 "r_mbytes_per_sec": 0, 00:22:29.636 "w_mbytes_per_sec": 0 00:22:29.636 }, 00:22:29.636 "claimed": true, 00:22:29.636 "claim_type": "exclusive_write", 00:22:29.636 "zoned": false, 00:22:29.637 "supported_io_types": { 00:22:29.637 "read": true, 00:22:29.637 "write": true, 00:22:29.637 "unmap": true, 00:22:29.637 "flush": true, 00:22:29.637 "reset": true, 00:22:29.637 "nvme_admin": false, 00:22:29.637 "nvme_io": false, 00:22:29.637 "nvme_io_md": false, 00:22:29.637 "write_zeroes": true, 00:22:29.637 "zcopy": true, 00:22:29.637 "get_zone_info": false, 00:22:29.637 "zone_management": false, 00:22:29.637 "zone_append": false, 00:22:29.637 "compare": false, 00:22:29.637 "compare_and_write": false, 00:22:29.637 "abort": true, 00:22:29.637 "seek_hole": false, 00:22:29.637 "seek_data": false, 00:22:29.637 "copy": true, 00:22:29.637 "nvme_iov_md": false 00:22:29.637 }, 00:22:29.637 "memory_domains": [ 00:22:29.637 { 00:22:29.637 
"dma_device_id": "system", 00:22:29.637 "dma_device_type": 1 00:22:29.637 }, 00:22:29.637 { 00:22:29.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.637 "dma_device_type": 2 00:22:29.637 } 00:22:29.637 ], 00:22:29.637 "driver_specific": {} 00:22:29.637 } 00:22:29.637 ] 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.637 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.896 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.896 "name": "Existed_Raid", 00:22:29.896 "uuid": "e920f9b2-55e7-4063-8a0b-cda2a0707b05", 00:22:29.896 "strip_size_kb": 0, 00:22:29.896 "state": "configuring", 00:22:29.896 "raid_level": "raid1", 00:22:29.896 "superblock": true, 00:22:29.896 "num_base_bdevs": 2, 00:22:29.896 "num_base_bdevs_discovered": 1, 00:22:29.896 "num_base_bdevs_operational": 2, 00:22:29.896 "base_bdevs_list": [ 00:22:29.896 { 00:22:29.896 "name": "BaseBdev1", 00:22:29.896 "uuid": "dcfd1c51-6e83-420a-ae5a-ed00e1103034", 00:22:29.896 "is_configured": true, 00:22:29.896 "data_offset": 256, 00:22:29.896 "data_size": 7936 00:22:29.896 }, 00:22:29.896 { 00:22:29.896 "name": "BaseBdev2", 00:22:29.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.896 "is_configured": false, 00:22:29.896 "data_offset": 0, 00:22:29.896 "data_size": 0 00:22:29.896 } 00:22:29.896 ] 00:22:29.896 }' 00:22:29.896 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.896 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.464 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:30.464 [2024-07-24 18:25:38.962758] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:30.464 [2024-07-24 18:25:38.962797] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x128ea90 name Existed_Raid, state configuring 00:22:30.464 18:25:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:30.722 [2024-07-24 18:25:39.131221] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:30.722 [2024-07-24 18:25:39.132290] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:30.722 [2024-07-24 18:25:39.132318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:30.722 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:30.722 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:30.722 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:30.722 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:30.722 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:30.722 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:30.722 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:30.722 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:30.723 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.723 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.723 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.723 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:22:30.723 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.723 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.981 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.981 "name": "Existed_Raid", 00:22:30.981 "uuid": "717727f4-4b97-4d37-a763-77d519fe114f", 00:22:30.981 "strip_size_kb": 0, 00:22:30.981 "state": "configuring", 00:22:30.981 "raid_level": "raid1", 00:22:30.981 "superblock": true, 00:22:30.981 "num_base_bdevs": 2, 00:22:30.981 "num_base_bdevs_discovered": 1, 00:22:30.981 "num_base_bdevs_operational": 2, 00:22:30.981 "base_bdevs_list": [ 00:22:30.981 { 00:22:30.981 "name": "BaseBdev1", 00:22:30.981 "uuid": "dcfd1c51-6e83-420a-ae5a-ed00e1103034", 00:22:30.981 "is_configured": true, 00:22:30.981 "data_offset": 256, 00:22:30.981 "data_size": 7936 00:22:30.981 }, 00:22:30.981 { 00:22:30.981 "name": "BaseBdev2", 00:22:30.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.981 "is_configured": false, 00:22:30.981 "data_offset": 0, 00:22:30.981 "data_size": 0 00:22:30.981 } 00:22:30.981 ] 00:22:30.981 }' 00:22:30.981 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.981 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.240 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:31.499 [2024-07-24 18:25:39.960754] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:31.499 [2024-07-24 18:25:39.960859] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x128e1d0 00:22:31.499 [2024-07-24 18:25:39.960868] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:31.499 [2024-07-24 18:25:39.960910] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x128dc10 00:22:31.499 [2024-07-24 18:25:39.960975] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x128e1d0 00:22:31.499 [2024-07-24 18:25:39.960981] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x128e1d0 00:22:31.499 [2024-07-24 18:25:39.961024] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.499 BaseBdev2 00:22:31.499 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:31.499 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:31.499 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:31.499 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:22:31.499 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:31.500 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:31.500 18:25:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:31.758 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:31.758 [ 00:22:31.758 { 00:22:31.758 
"name": "BaseBdev2", 00:22:31.758 "aliases": [ 00:22:31.758 "e20ad940-594f-401e-8b0c-71a55e2e5c2b" 00:22:31.758 ], 00:22:31.758 "product_name": "Malloc disk", 00:22:31.759 "block_size": 4096, 00:22:31.759 "num_blocks": 8192, 00:22:31.759 "uuid": "e20ad940-594f-401e-8b0c-71a55e2e5c2b", 00:22:31.759 "md_size": 32, 00:22:31.759 "md_interleave": false, 00:22:31.759 "dif_type": 0, 00:22:31.759 "assigned_rate_limits": { 00:22:31.759 "rw_ios_per_sec": 0, 00:22:31.759 "rw_mbytes_per_sec": 0, 00:22:31.759 "r_mbytes_per_sec": 0, 00:22:31.759 "w_mbytes_per_sec": 0 00:22:31.759 }, 00:22:31.759 "claimed": true, 00:22:31.759 "claim_type": "exclusive_write", 00:22:31.759 "zoned": false, 00:22:31.759 "supported_io_types": { 00:22:31.759 "read": true, 00:22:31.759 "write": true, 00:22:31.759 "unmap": true, 00:22:31.759 "flush": true, 00:22:31.759 "reset": true, 00:22:31.759 "nvme_admin": false, 00:22:31.759 "nvme_io": false, 00:22:31.759 "nvme_io_md": false, 00:22:31.759 "write_zeroes": true, 00:22:31.759 "zcopy": true, 00:22:31.759 "get_zone_info": false, 00:22:31.759 "zone_management": false, 00:22:31.759 "zone_append": false, 00:22:31.759 "compare": false, 00:22:31.759 "compare_and_write": false, 00:22:31.759 "abort": true, 00:22:31.759 "seek_hole": false, 00:22:31.759 "seek_data": false, 00:22:31.759 "copy": true, 00:22:31.759 "nvme_iov_md": false 00:22:31.759 }, 00:22:31.759 "memory_domains": [ 00:22:31.759 { 00:22:31.759 "dma_device_id": "system", 00:22:31.759 "dma_device_type": 1 00:22:31.759 }, 00:22:31.759 { 00:22:31.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.759 "dma_device_type": 2 00:22:31.759 } 00:22:31.759 ], 00:22:31.759 "driver_specific": {} 00:22:31.759 } 00:22:31.759 ] 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:31.759 18:25:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.759 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.017 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.017 "name": "Existed_Raid", 00:22:32.017 "uuid": "717727f4-4b97-4d37-a763-77d519fe114f", 00:22:32.017 
"strip_size_kb": 0, 00:22:32.017 "state": "online", 00:22:32.017 "raid_level": "raid1", 00:22:32.017 "superblock": true, 00:22:32.017 "num_base_bdevs": 2, 00:22:32.017 "num_base_bdevs_discovered": 2, 00:22:32.017 "num_base_bdevs_operational": 2, 00:22:32.017 "base_bdevs_list": [ 00:22:32.017 { 00:22:32.017 "name": "BaseBdev1", 00:22:32.017 "uuid": "dcfd1c51-6e83-420a-ae5a-ed00e1103034", 00:22:32.017 "is_configured": true, 00:22:32.017 "data_offset": 256, 00:22:32.017 "data_size": 7936 00:22:32.017 }, 00:22:32.017 { 00:22:32.017 "name": "BaseBdev2", 00:22:32.017 "uuid": "e20ad940-594f-401e-8b0c-71a55e2e5c2b", 00:22:32.017 "is_configured": true, 00:22:32.017 "data_offset": 256, 00:22:32.017 "data_size": 7936 00:22:32.017 } 00:22:32.017 ] 00:22:32.017 }' 00:22:32.017 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.017 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.583 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:32.583 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:32.583 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:32.583 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:32.583 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:32.583 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:22:32.583 18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:32.583 
18:25:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:32.583 [2024-07-24 18:25:41.107905] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:32.583 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:32.583 "name": "Existed_Raid", 00:22:32.583 "aliases": [ 00:22:32.583 "717727f4-4b97-4d37-a763-77d519fe114f" 00:22:32.583 ], 00:22:32.583 "product_name": "Raid Volume", 00:22:32.583 "block_size": 4096, 00:22:32.583 "num_blocks": 7936, 00:22:32.583 "uuid": "717727f4-4b97-4d37-a763-77d519fe114f", 00:22:32.583 "md_size": 32, 00:22:32.583 "md_interleave": false, 00:22:32.583 "dif_type": 0, 00:22:32.583 "assigned_rate_limits": { 00:22:32.583 "rw_ios_per_sec": 0, 00:22:32.583 "rw_mbytes_per_sec": 0, 00:22:32.583 "r_mbytes_per_sec": 0, 00:22:32.583 "w_mbytes_per_sec": 0 00:22:32.583 }, 00:22:32.583 "claimed": false, 00:22:32.583 "zoned": false, 00:22:32.583 "supported_io_types": { 00:22:32.583 "read": true, 00:22:32.583 "write": true, 00:22:32.583 "unmap": false, 00:22:32.583 "flush": false, 00:22:32.583 "reset": true, 00:22:32.583 "nvme_admin": false, 00:22:32.583 "nvme_io": false, 00:22:32.583 "nvme_io_md": false, 00:22:32.583 "write_zeroes": true, 00:22:32.583 "zcopy": false, 00:22:32.583 "get_zone_info": false, 00:22:32.583 "zone_management": false, 00:22:32.583 "zone_append": false, 00:22:32.583 "compare": false, 00:22:32.583 "compare_and_write": false, 00:22:32.583 "abort": false, 00:22:32.583 "seek_hole": false, 00:22:32.583 "seek_data": false, 00:22:32.583 "copy": false, 00:22:32.583 "nvme_iov_md": false 00:22:32.583 }, 00:22:32.583 "memory_domains": [ 00:22:32.583 { 00:22:32.583 "dma_device_id": "system", 00:22:32.583 "dma_device_type": 1 00:22:32.583 }, 00:22:32.583 { 00:22:32.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.583 "dma_device_type": 2 00:22:32.583 }, 00:22:32.583 { 00:22:32.583 "dma_device_id": 
"system", 00:22:32.583 "dma_device_type": 1 00:22:32.583 }, 00:22:32.583 { 00:22:32.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.583 "dma_device_type": 2 00:22:32.583 } 00:22:32.583 ], 00:22:32.583 "driver_specific": { 00:22:32.583 "raid": { 00:22:32.583 "uuid": "717727f4-4b97-4d37-a763-77d519fe114f", 00:22:32.583 "strip_size_kb": 0, 00:22:32.583 "state": "online", 00:22:32.583 "raid_level": "raid1", 00:22:32.583 "superblock": true, 00:22:32.583 "num_base_bdevs": 2, 00:22:32.583 "num_base_bdevs_discovered": 2, 00:22:32.583 "num_base_bdevs_operational": 2, 00:22:32.583 "base_bdevs_list": [ 00:22:32.583 { 00:22:32.583 "name": "BaseBdev1", 00:22:32.583 "uuid": "dcfd1c51-6e83-420a-ae5a-ed00e1103034", 00:22:32.583 "is_configured": true, 00:22:32.583 "data_offset": 256, 00:22:32.583 "data_size": 7936 00:22:32.583 }, 00:22:32.583 { 00:22:32.583 "name": "BaseBdev2", 00:22:32.583 "uuid": "e20ad940-594f-401e-8b0c-71a55e2e5c2b", 00:22:32.583 "is_configured": true, 00:22:32.583 "data_offset": 256, 00:22:32.583 "data_size": 7936 00:22:32.583 } 00:22:32.583 ] 00:22:32.583 } 00:22:32.583 } 00:22:32.583 }' 00:22:32.583 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:32.583 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:32.583 BaseBdev2' 00:22:32.583 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:32.583 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:32.584 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:32.841 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:32.841 "name": "BaseBdev1", 00:22:32.841 "aliases": [ 00:22:32.841 "dcfd1c51-6e83-420a-ae5a-ed00e1103034" 00:22:32.841 ], 00:22:32.841 "product_name": "Malloc disk", 00:22:32.841 "block_size": 4096, 00:22:32.841 "num_blocks": 8192, 00:22:32.841 "uuid": "dcfd1c51-6e83-420a-ae5a-ed00e1103034", 00:22:32.841 "md_size": 32, 00:22:32.841 "md_interleave": false, 00:22:32.841 "dif_type": 0, 00:22:32.841 "assigned_rate_limits": { 00:22:32.841 "rw_ios_per_sec": 0, 00:22:32.841 "rw_mbytes_per_sec": 0, 00:22:32.841 "r_mbytes_per_sec": 0, 00:22:32.841 "w_mbytes_per_sec": 0 00:22:32.841 }, 00:22:32.841 "claimed": true, 00:22:32.841 "claim_type": "exclusive_write", 00:22:32.841 "zoned": false, 00:22:32.841 "supported_io_types": { 00:22:32.841 "read": true, 00:22:32.841 "write": true, 00:22:32.841 "unmap": true, 00:22:32.841 "flush": true, 00:22:32.841 "reset": true, 00:22:32.841 "nvme_admin": false, 00:22:32.842 "nvme_io": false, 00:22:32.842 "nvme_io_md": false, 00:22:32.842 "write_zeroes": true, 00:22:32.842 "zcopy": true, 00:22:32.842 "get_zone_info": false, 00:22:32.842 "zone_management": false, 00:22:32.842 "zone_append": false, 00:22:32.842 "compare": false, 00:22:32.842 "compare_and_write": false, 00:22:32.842 "abort": true, 00:22:32.842 "seek_hole": false, 00:22:32.842 "seek_data": false, 00:22:32.842 "copy": true, 00:22:32.842 "nvme_iov_md": false 00:22:32.842 }, 00:22:32.842 "memory_domains": [ 00:22:32.842 { 00:22:32.842 "dma_device_id": "system", 00:22:32.842 "dma_device_type": 1 00:22:32.842 }, 00:22:32.842 { 00:22:32.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.842 "dma_device_type": 2 00:22:32.842 } 00:22:32.842 ], 00:22:32.842 "driver_specific": {} 00:22:32.842 }' 00:22:32.842 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:32.842 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:22:32.842 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:32.842 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:33.100 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:33.358 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:33.358 "name": "BaseBdev2", 00:22:33.358 "aliases": [ 00:22:33.358 "e20ad940-594f-401e-8b0c-71a55e2e5c2b" 00:22:33.358 ], 00:22:33.358 "product_name": "Malloc disk", 00:22:33.358 "block_size": 4096, 00:22:33.358 "num_blocks": 8192, 
00:22:33.358 "uuid": "e20ad940-594f-401e-8b0c-71a55e2e5c2b", 00:22:33.358 "md_size": 32, 00:22:33.358 "md_interleave": false, 00:22:33.358 "dif_type": 0, 00:22:33.358 "assigned_rate_limits": { 00:22:33.358 "rw_ios_per_sec": 0, 00:22:33.358 "rw_mbytes_per_sec": 0, 00:22:33.358 "r_mbytes_per_sec": 0, 00:22:33.358 "w_mbytes_per_sec": 0 00:22:33.358 }, 00:22:33.358 "claimed": true, 00:22:33.358 "claim_type": "exclusive_write", 00:22:33.358 "zoned": false, 00:22:33.358 "supported_io_types": { 00:22:33.358 "read": true, 00:22:33.358 "write": true, 00:22:33.358 "unmap": true, 00:22:33.358 "flush": true, 00:22:33.358 "reset": true, 00:22:33.358 "nvme_admin": false, 00:22:33.358 "nvme_io": false, 00:22:33.358 "nvme_io_md": false, 00:22:33.358 "write_zeroes": true, 00:22:33.358 "zcopy": true, 00:22:33.358 "get_zone_info": false, 00:22:33.358 "zone_management": false, 00:22:33.358 "zone_append": false, 00:22:33.358 "compare": false, 00:22:33.358 "compare_and_write": false, 00:22:33.358 "abort": true, 00:22:33.358 "seek_hole": false, 00:22:33.358 "seek_data": false, 00:22:33.358 "copy": true, 00:22:33.358 "nvme_iov_md": false 00:22:33.358 }, 00:22:33.358 "memory_domains": [ 00:22:33.358 { 00:22:33.358 "dma_device_id": "system", 00:22:33.358 "dma_device_type": 1 00:22:33.358 }, 00:22:33.358 { 00:22:33.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.358 "dma_device_type": 2 00:22:33.358 } 00:22:33.358 ], 00:22:33.358 "driver_specific": {} 00:22:33.358 }' 00:22:33.358 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:33.358 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:33.358 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:33.358 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:33.358 18:25:41 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:33.358 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:33.358 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:33.617 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:33.617 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:33.617 18:25:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:33.617 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:33.617 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:33.617 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:33.617 [2024-07-24 18:25:42.206601] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 
00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.876 "name": "Existed_Raid", 00:22:33.876 "uuid": "717727f4-4b97-4d37-a763-77d519fe114f", 00:22:33.876 "strip_size_kb": 0, 00:22:33.876 "state": "online", 00:22:33.876 "raid_level": "raid1", 00:22:33.876 "superblock": true, 00:22:33.876 "num_base_bdevs": 2, 00:22:33.876 "num_base_bdevs_discovered": 1, 00:22:33.876 "num_base_bdevs_operational": 1, 00:22:33.876 
"base_bdevs_list": [ 00:22:33.876 { 00:22:33.876 "name": null, 00:22:33.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.876 "is_configured": false, 00:22:33.876 "data_offset": 256, 00:22:33.876 "data_size": 7936 00:22:33.876 }, 00:22:33.876 { 00:22:33.876 "name": "BaseBdev2", 00:22:33.876 "uuid": "e20ad940-594f-401e-8b0c-71a55e2e5c2b", 00:22:33.876 "is_configured": true, 00:22:33.876 "data_offset": 256, 00:22:33.876 "data_size": 7936 00:22:33.876 } 00:22:33.876 ] 00:22:33.876 }' 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.876 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.444 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:34.444 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:34.444 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:34.444 18:25:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.702 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:34.702 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:34.702 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:34.702 [2024-07-24 18:25:43.194908] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:34.702 [2024-07-24 18:25:43.194972] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:22:34.702 [2024-07-24 18:25:43.205520] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:34.702 [2024-07-24 18:25:43.205543] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:34.702 [2024-07-24 18:25:43.205551] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x128e1d0 name Existed_Raid, state offline 00:22:34.702 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:34.702 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:34.702 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.702 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 2300422 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 2300422 ']' 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 2300422 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2300422 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2300422' 00:22:34.963 killing process with pid 2300422 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 2300422 00:22:34.963 [2024-07-24 18:25:43.437991] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:34.963 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 2300422 00:22:34.963 [2024-07-24 18:25:43.438786] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:35.224 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:22:35.224 00:22:35.224 real 0m8.011s 00:22:35.224 user 0m14.054s 00:22:35.224 sys 0m1.610s 00:22:35.224 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:35.224 18:25:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.224 ************************************ 00:22:35.224 END TEST raid_state_function_test_sb_md_separate 00:22:35.224 ************************************ 00:22:35.224 18:25:43 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:22:35.224 18:25:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:35.224 18:25:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:22:35.224 18:25:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:35.224 ************************************ 00:22:35.224 START TEST raid_superblock_test_md_separate 00:22:35.224 ************************************ 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' 
raid1 ']' 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=2302014 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 2302014 /var/tmp/spdk-raid.sock 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 2302014 ']' 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:35.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.224 18:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:35.224 [2024-07-24 18:25:43.736873] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:22:35.224 [2024-07-24 18:25:43.736918] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2302014 ] 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:01.0 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:01.1 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:01.2 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:01.3 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:01.4 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:01.5 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:01.6 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:01.7 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:02.0 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:02.1 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:02.2 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:02.3 cannot be used 
00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:02.4 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:02.5 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:02.6 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b3:02.7 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b5:01.0 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b5:01.1 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b5:01.2 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b5:01.3 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b5:01.4 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b5:01.5 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b5:01.6 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.224 EAL: Requested device 0000:b5:01.7 cannot be used 00:22:35.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.225 EAL: Requested device 0000:b5:02.0 cannot be used 00:22:35.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.225 EAL: Requested device 0000:b5:02.1 cannot be used 00:22:35.225 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.225 EAL: Requested device 0000:b5:02.2 cannot be used 00:22:35.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.225 EAL: Requested device 0000:b5:02.3 cannot be used 00:22:35.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.225 EAL: Requested device 0000:b5:02.4 cannot be used 00:22:35.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.225 EAL: Requested device 0000:b5:02.5 cannot be used 00:22:35.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.225 EAL: Requested device 0000:b5:02.6 cannot be used 00:22:35.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:35.225 EAL: Requested device 0000:b5:02.7 cannot be used 00:22:35.483 [2024-07-24 18:25:43.830842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.483 [2024-07-24 18:25:43.909267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.483 [2024-07-24 18:25:43.960435] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:35.483 [2024-07-24 18:25:43.960459] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.050 18:25:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.050 18:25:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:22:36.050 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:36.050 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:36.050 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:36.050 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:36.050 18:25:44 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:36.050 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:36.050 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:36.050 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:36.050 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:22:36.309 malloc1 00:22:36.309 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:36.309 [2024-07-24 18:25:44.845219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:36.309 [2024-07-24 18:25:44.845256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.309 [2024-07-24 18:25:44.845270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11e3f70 00:22:36.309 [2024-07-24 18:25:44.845279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.309 [2024-07-24 18:25:44.846306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.309 [2024-07-24 18:25:44.846328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:36.309 pt1 00:22:36.309 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:36.309 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:36.309 
18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:36.309 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:36.309 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:36.309 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:36.309 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:36.309 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:36.309 18:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:22:36.568 malloc2 00:22:36.568 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:36.827 [2024-07-24 18:25:45.178509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:36.827 [2024-07-24 18:25:45.178544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.827 [2024-07-24 18:25:45.178558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11d5990 00:22:36.827 [2024-07-24 18:25:45.178566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.827 [2024-07-24 18:25:45.179510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.827 [2024-07-24 18:25:45.179531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:36.827 
pt2 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:22:36.827 [2024-07-24 18:25:45.342948] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:36.827 [2024-07-24 18:25:45.343823] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:36.827 [2024-07-24 18:25:45.343920] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11d6460 00:22:36.827 [2024-07-24 18:25:45.343929] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:36.827 [2024-07-24 18:25:45.343979] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11ca1a0 00:22:36.827 [2024-07-24 18:25:45.344054] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11d6460 00:22:36.827 [2024-07-24 18:25:45.344060] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11d6460 00:22:36.827 [2024-07-24 18:25:45.344103] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.827 
18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.827 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.127 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:37.127 "name": "raid_bdev1", 00:22:37.127 "uuid": "75b9aecf-5da2-45ef-99ab-514e04623ea9", 00:22:37.127 "strip_size_kb": 0, 00:22:37.127 "state": "online", 00:22:37.127 "raid_level": "raid1", 00:22:37.127 "superblock": true, 00:22:37.127 "num_base_bdevs": 2, 00:22:37.127 "num_base_bdevs_discovered": 2, 00:22:37.127 "num_base_bdevs_operational": 2, 00:22:37.127 "base_bdevs_list": [ 00:22:37.127 { 00:22:37.127 "name": "pt1", 00:22:37.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:37.127 "is_configured": true, 00:22:37.127 "data_offset": 256, 00:22:37.127 "data_size": 7936 00:22:37.127 }, 00:22:37.127 { 00:22:37.127 "name": "pt2", 00:22:37.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:37.127 "is_configured": true, 00:22:37.127 "data_offset": 256, 00:22:37.127 "data_size": 7936 00:22:37.127 } 00:22:37.127 ] 
00:22:37.127 }' 00:22:37.127 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:37.127 18:25:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.386 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:37.386 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:37.386 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:37.386 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:37.386 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:37.386 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:22:37.386 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:37.386 18:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:37.645 [2024-07-24 18:25:46.117100] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:37.645 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:37.645 "name": "raid_bdev1", 00:22:37.645 "aliases": [ 00:22:37.645 "75b9aecf-5da2-45ef-99ab-514e04623ea9" 00:22:37.645 ], 00:22:37.645 "product_name": "Raid Volume", 00:22:37.645 "block_size": 4096, 00:22:37.645 "num_blocks": 7936, 00:22:37.645 "uuid": "75b9aecf-5da2-45ef-99ab-514e04623ea9", 00:22:37.645 "md_size": 32, 00:22:37.645 "md_interleave": false, 00:22:37.645 "dif_type": 0, 00:22:37.645 "assigned_rate_limits": { 00:22:37.645 "rw_ios_per_sec": 0, 00:22:37.645 
"rw_mbytes_per_sec": 0, 00:22:37.645 "r_mbytes_per_sec": 0, 00:22:37.645 "w_mbytes_per_sec": 0 00:22:37.645 }, 00:22:37.645 "claimed": false, 00:22:37.645 "zoned": false, 00:22:37.645 "supported_io_types": { 00:22:37.645 "read": true, 00:22:37.645 "write": true, 00:22:37.645 "unmap": false, 00:22:37.645 "flush": false, 00:22:37.645 "reset": true, 00:22:37.645 "nvme_admin": false, 00:22:37.645 "nvme_io": false, 00:22:37.645 "nvme_io_md": false, 00:22:37.645 "write_zeroes": true, 00:22:37.645 "zcopy": false, 00:22:37.645 "get_zone_info": false, 00:22:37.645 "zone_management": false, 00:22:37.645 "zone_append": false, 00:22:37.645 "compare": false, 00:22:37.645 "compare_and_write": false, 00:22:37.645 "abort": false, 00:22:37.645 "seek_hole": false, 00:22:37.645 "seek_data": false, 00:22:37.645 "copy": false, 00:22:37.645 "nvme_iov_md": false 00:22:37.645 }, 00:22:37.645 "memory_domains": [ 00:22:37.645 { 00:22:37.645 "dma_device_id": "system", 00:22:37.645 "dma_device_type": 1 00:22:37.645 }, 00:22:37.645 { 00:22:37.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.645 "dma_device_type": 2 00:22:37.645 }, 00:22:37.645 { 00:22:37.645 "dma_device_id": "system", 00:22:37.645 "dma_device_type": 1 00:22:37.645 }, 00:22:37.645 { 00:22:37.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.645 "dma_device_type": 2 00:22:37.645 } 00:22:37.645 ], 00:22:37.645 "driver_specific": { 00:22:37.645 "raid": { 00:22:37.645 "uuid": "75b9aecf-5da2-45ef-99ab-514e04623ea9", 00:22:37.645 "strip_size_kb": 0, 00:22:37.646 "state": "online", 00:22:37.646 "raid_level": "raid1", 00:22:37.646 "superblock": true, 00:22:37.646 "num_base_bdevs": 2, 00:22:37.646 "num_base_bdevs_discovered": 2, 00:22:37.646 "num_base_bdevs_operational": 2, 00:22:37.646 "base_bdevs_list": [ 00:22:37.646 { 00:22:37.646 "name": "pt1", 00:22:37.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:37.646 "is_configured": true, 00:22:37.646 "data_offset": 256, 00:22:37.646 "data_size": 7936 00:22:37.646 }, 
00:22:37.646 { 00:22:37.646 "name": "pt2", 00:22:37.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:37.646 "is_configured": true, 00:22:37.646 "data_offset": 256, 00:22:37.646 "data_size": 7936 00:22:37.646 } 00:22:37.646 ] 00:22:37.646 } 00:22:37.646 } 00:22:37.646 }' 00:22:37.646 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:37.646 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:37.646 pt2' 00:22:37.646 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:37.646 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:37.646 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:37.905 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:37.905 "name": "pt1", 00:22:37.905 "aliases": [ 00:22:37.905 "00000000-0000-0000-0000-000000000001" 00:22:37.905 ], 00:22:37.905 "product_name": "passthru", 00:22:37.905 "block_size": 4096, 00:22:37.905 "num_blocks": 8192, 00:22:37.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:37.905 "md_size": 32, 00:22:37.905 "md_interleave": false, 00:22:37.905 "dif_type": 0, 00:22:37.905 "assigned_rate_limits": { 00:22:37.905 "rw_ios_per_sec": 0, 00:22:37.905 "rw_mbytes_per_sec": 0, 00:22:37.905 "r_mbytes_per_sec": 0, 00:22:37.905 "w_mbytes_per_sec": 0 00:22:37.905 }, 00:22:37.905 "claimed": true, 00:22:37.905 "claim_type": "exclusive_write", 00:22:37.905 "zoned": false, 00:22:37.905 "supported_io_types": { 00:22:37.905 "read": true, 00:22:37.905 "write": true, 00:22:37.905 "unmap": true, 00:22:37.905 "flush": true, 00:22:37.905 "reset": 
true, 00:22:37.905 "nvme_admin": false, 00:22:37.905 "nvme_io": false, 00:22:37.905 "nvme_io_md": false, 00:22:37.905 "write_zeroes": true, 00:22:37.905 "zcopy": true, 00:22:37.905 "get_zone_info": false, 00:22:37.905 "zone_management": false, 00:22:37.905 "zone_append": false, 00:22:37.905 "compare": false, 00:22:37.905 "compare_and_write": false, 00:22:37.905 "abort": true, 00:22:37.905 "seek_hole": false, 00:22:37.905 "seek_data": false, 00:22:37.905 "copy": true, 00:22:37.905 "nvme_iov_md": false 00:22:37.905 }, 00:22:37.905 "memory_domains": [ 00:22:37.905 { 00:22:37.905 "dma_device_id": "system", 00:22:37.905 "dma_device_type": 1 00:22:37.905 }, 00:22:37.905 { 00:22:37.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.905 "dma_device_type": 2 00:22:37.905 } 00:22:37.905 ], 00:22:37.905 "driver_specific": { 00:22:37.905 "passthru": { 00:22:37.905 "name": "pt1", 00:22:37.905 "base_bdev_name": "malloc1" 00:22:37.905 } 00:22:37.905 } 00:22:37.905 }' 00:22:37.905 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:37.905 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:37.905 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:37.905 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:37.905 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:37.905 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:37.905 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.164 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.164 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:38.164 18:25:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.164 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.164 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:38.164 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:38.164 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:38.164 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:38.423 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:38.423 "name": "pt2", 00:22:38.423 "aliases": [ 00:22:38.423 "00000000-0000-0000-0000-000000000002" 00:22:38.423 ], 00:22:38.423 "product_name": "passthru", 00:22:38.423 "block_size": 4096, 00:22:38.423 "num_blocks": 8192, 00:22:38.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:38.423 "md_size": 32, 00:22:38.423 "md_interleave": false, 00:22:38.423 "dif_type": 0, 00:22:38.423 "assigned_rate_limits": { 00:22:38.423 "rw_ios_per_sec": 0, 00:22:38.423 "rw_mbytes_per_sec": 0, 00:22:38.423 "r_mbytes_per_sec": 0, 00:22:38.423 "w_mbytes_per_sec": 0 00:22:38.423 }, 00:22:38.423 "claimed": true, 00:22:38.423 "claim_type": "exclusive_write", 00:22:38.423 "zoned": false, 00:22:38.423 "supported_io_types": { 00:22:38.423 "read": true, 00:22:38.423 "write": true, 00:22:38.423 "unmap": true, 00:22:38.423 "flush": true, 00:22:38.423 "reset": true, 00:22:38.423 "nvme_admin": false, 00:22:38.423 "nvme_io": false, 00:22:38.423 "nvme_io_md": false, 00:22:38.423 "write_zeroes": true, 00:22:38.423 "zcopy": true, 00:22:38.423 "get_zone_info": false, 00:22:38.423 "zone_management": false, 00:22:38.423 "zone_append": false, 00:22:38.423 
"compare": false, 00:22:38.423 "compare_and_write": false, 00:22:38.423 "abort": true, 00:22:38.423 "seek_hole": false, 00:22:38.423 "seek_data": false, 00:22:38.423 "copy": true, 00:22:38.423 "nvme_iov_md": false 00:22:38.423 }, 00:22:38.423 "memory_domains": [ 00:22:38.423 { 00:22:38.423 "dma_device_id": "system", 00:22:38.423 "dma_device_type": 1 00:22:38.423 }, 00:22:38.423 { 00:22:38.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.423 "dma_device_type": 2 00:22:38.423 } 00:22:38.423 ], 00:22:38.423 "driver_specific": { 00:22:38.423 "passthru": { 00:22:38.423 "name": "pt2", 00:22:38.423 "base_bdev_name": "malloc2" 00:22:38.423 } 00:22:38.423 } 00:22:38.423 }' 00:22:38.423 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.423 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.423 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:38.424 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.424 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.424 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:38.424 18:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.683 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.683 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:38.683 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.683 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.683 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 
]] 00:22:38.683 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:38.683 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:38.942 [2024-07-24 18:25:47.288088] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:38.942 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=75b9aecf-5da2-45ef-99ab-514e04623ea9 00:22:38.942 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 75b9aecf-5da2-45ef-99ab-514e04623ea9 ']' 00:22:38.942 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:38.942 [2024-07-24 18:25:47.452350] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:38.942 [2024-07-24 18:25:47.452363] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:38.942 [2024-07-24 18:25:47.452403] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:38.942 [2024-07-24 18:25:47.452440] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:38.942 [2024-07-24 18:25:47.452447] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11d6460 name raid_bdev1, state offline 00:22:38.942 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.942 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:39.201 18:25:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:39.201 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:39.201 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:39.201 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:39.201 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:39.201 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:39.460 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:39.460 18:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:39.719 18:25:48 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:39.719 [2024-07-24 18:25:48.290490] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:39.719 [2024-07-24 18:25:48.291421] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:39.719 [2024-07-24 18:25:48.291463] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:39.719 [2024-07-24 18:25:48.291490] 
bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:39.719 [2024-07-24 18:25:48.291502] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:39.719 [2024-07-24 18:25:48.291524] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1046610 name raid_bdev1, state configuring 00:22:39.719 request: 00:22:39.719 { 00:22:39.719 "name": "raid_bdev1", 00:22:39.719 "raid_level": "raid1", 00:22:39.719 "base_bdevs": [ 00:22:39.719 "malloc1", 00:22:39.719 "malloc2" 00:22:39.719 ], 00:22:39.719 "superblock": false, 00:22:39.719 "method": "bdev_raid_create", 00:22:39.719 "req_id": 1 00:22:39.719 } 00:22:39.719 Got JSON-RPC error response 00:22:39.719 response: 00:22:39.719 { 00:22:39.719 "code": -17, 00:22:39.719 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:39.719 } 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:39.719 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:39.720 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:39.720 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.978 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:39.978 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:39.978 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:40.237 [2024-07-24 18:25:48.635351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:40.237 [2024-07-24 18:25:48.635376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.237 [2024-07-24 18:25:48.635386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11e4620 00:22:40.237 [2024-07-24 18:25:48.635415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.237 [2024-07-24 18:25:48.636424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.237 [2024-07-24 18:25:48.636445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:40.237 [2024-07-24 18:25:48.636477] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:40.237 [2024-07-24 18:25:48.636495] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:40.237 pt1 00:22:40.237 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:40.237 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:40.237 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:40.237 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:40.237 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:40.237 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:40.238 18:25:48 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.238 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.238 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.238 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.238 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.238 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.238 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.238 "name": "raid_bdev1", 00:22:40.238 "uuid": "75b9aecf-5da2-45ef-99ab-514e04623ea9", 00:22:40.238 "strip_size_kb": 0, 00:22:40.238 "state": "configuring", 00:22:40.238 "raid_level": "raid1", 00:22:40.238 "superblock": true, 00:22:40.238 "num_base_bdevs": 2, 00:22:40.238 "num_base_bdevs_discovered": 1, 00:22:40.238 "num_base_bdevs_operational": 2, 00:22:40.238 "base_bdevs_list": [ 00:22:40.238 { 00:22:40.238 "name": "pt1", 00:22:40.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:40.238 "is_configured": true, 00:22:40.238 "data_offset": 256, 00:22:40.238 "data_size": 7936 00:22:40.238 }, 00:22:40.238 { 00:22:40.238 "name": null, 00:22:40.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:40.238 "is_configured": false, 00:22:40.238 "data_offset": 256, 00:22:40.238 "data_size": 7936 00:22:40.238 } 00:22:40.238 ] 00:22:40.238 }' 00:22:40.238 18:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.238 18:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:40.806 18:25:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:22:40.806 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:40.806 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:40.806 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:41.064 [2024-07-24 18:25:49.477567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:41.064 [2024-07-24 18:25:49.477600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.064 [2024-07-24 18:25:49.477612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1046bd0 00:22:41.064 [2024-07-24 18:25:49.477641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.064 [2024-07-24 18:25:49.477788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.064 [2024-07-24 18:25:49.477799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:41.064 [2024-07-24 18:25:49.477834] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:41.064 [2024-07-24 18:25:49.477846] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:41.064 [2024-07-24 18:25:49.477906] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11cad10 00:22:41.064 [2024-07-24 18:25:49.477913] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:41.064 [2024-07-24 18:25:49.477951] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11cbf40 00:22:41.064 [2024-07-24 18:25:49.478017] bdev_raid.c:1750:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x11cad10 00:22:41.064 [2024-07-24 18:25:49.478024] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11cad10 00:22:41.064 [2024-07-24 18:25:49.478069] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.064 pt2 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.065 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.065 18:25:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.324 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.324 "name": "raid_bdev1", 00:22:41.324 "uuid": "75b9aecf-5da2-45ef-99ab-514e04623ea9", 00:22:41.324 "strip_size_kb": 0, 00:22:41.324 "state": "online", 00:22:41.324 "raid_level": "raid1", 00:22:41.324 "superblock": true, 00:22:41.324 "num_base_bdevs": 2, 00:22:41.324 "num_base_bdevs_discovered": 2, 00:22:41.324 "num_base_bdevs_operational": 2, 00:22:41.324 "base_bdevs_list": [ 00:22:41.324 { 00:22:41.324 "name": "pt1", 00:22:41.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:41.324 "is_configured": true, 00:22:41.324 "data_offset": 256, 00:22:41.324 "data_size": 7936 00:22:41.324 }, 00:22:41.324 { 00:22:41.324 "name": "pt2", 00:22:41.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.324 "is_configured": true, 00:22:41.324 "data_offset": 256, 00:22:41.324 "data_size": 7936 00:22:41.324 } 00:22:41.324 ] 00:22:41.324 }' 00:22:41.324 18:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.324 18:25:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.582 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:41.582 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:41.582 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:41.582 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:41.582 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:41.582 18:25:50 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # local name 00:22:41.582 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:41.582 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:41.840 [2024-07-24 18:25:50.307886] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:41.840 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:41.840 "name": "raid_bdev1", 00:22:41.840 "aliases": [ 00:22:41.840 "75b9aecf-5da2-45ef-99ab-514e04623ea9" 00:22:41.840 ], 00:22:41.840 "product_name": "Raid Volume", 00:22:41.840 "block_size": 4096, 00:22:41.840 "num_blocks": 7936, 00:22:41.840 "uuid": "75b9aecf-5da2-45ef-99ab-514e04623ea9", 00:22:41.840 "md_size": 32, 00:22:41.840 "md_interleave": false, 00:22:41.840 "dif_type": 0, 00:22:41.840 "assigned_rate_limits": { 00:22:41.840 "rw_ios_per_sec": 0, 00:22:41.840 "rw_mbytes_per_sec": 0, 00:22:41.840 "r_mbytes_per_sec": 0, 00:22:41.840 "w_mbytes_per_sec": 0 00:22:41.840 }, 00:22:41.841 "claimed": false, 00:22:41.841 "zoned": false, 00:22:41.841 "supported_io_types": { 00:22:41.841 "read": true, 00:22:41.841 "write": true, 00:22:41.841 "unmap": false, 00:22:41.841 "flush": false, 00:22:41.841 "reset": true, 00:22:41.841 "nvme_admin": false, 00:22:41.841 "nvme_io": false, 00:22:41.841 "nvme_io_md": false, 00:22:41.841 "write_zeroes": true, 00:22:41.841 "zcopy": false, 00:22:41.841 "get_zone_info": false, 00:22:41.841 "zone_management": false, 00:22:41.841 "zone_append": false, 00:22:41.841 "compare": false, 00:22:41.841 "compare_and_write": false, 00:22:41.841 "abort": false, 00:22:41.841 "seek_hole": false, 00:22:41.841 "seek_data": false, 00:22:41.841 "copy": false, 00:22:41.841 "nvme_iov_md": false 00:22:41.841 }, 00:22:41.841 "memory_domains": [ 00:22:41.841 { 00:22:41.841 
"dma_device_id": "system", 00:22:41.841 "dma_device_type": 1 00:22:41.841 }, 00:22:41.841 { 00:22:41.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.841 "dma_device_type": 2 00:22:41.841 }, 00:22:41.841 { 00:22:41.841 "dma_device_id": "system", 00:22:41.841 "dma_device_type": 1 00:22:41.841 }, 00:22:41.841 { 00:22:41.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.841 "dma_device_type": 2 00:22:41.841 } 00:22:41.841 ], 00:22:41.841 "driver_specific": { 00:22:41.841 "raid": { 00:22:41.841 "uuid": "75b9aecf-5da2-45ef-99ab-514e04623ea9", 00:22:41.841 "strip_size_kb": 0, 00:22:41.841 "state": "online", 00:22:41.841 "raid_level": "raid1", 00:22:41.841 "superblock": true, 00:22:41.841 "num_base_bdevs": 2, 00:22:41.841 "num_base_bdevs_discovered": 2, 00:22:41.841 "num_base_bdevs_operational": 2, 00:22:41.841 "base_bdevs_list": [ 00:22:41.841 { 00:22:41.841 "name": "pt1", 00:22:41.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:41.841 "is_configured": true, 00:22:41.841 "data_offset": 256, 00:22:41.841 "data_size": 7936 00:22:41.841 }, 00:22:41.841 { 00:22:41.841 "name": "pt2", 00:22:41.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:41.841 "is_configured": true, 00:22:41.841 "data_offset": 256, 00:22:41.841 "data_size": 7936 00:22:41.841 } 00:22:41.841 ] 00:22:41.841 } 00:22:41.841 } 00:22:41.841 }' 00:22:41.841 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:41.841 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:41.841 pt2' 00:22:41.841 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:41.841 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 
00:22:41.841 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:42.100 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:42.100 "name": "pt1", 00:22:42.100 "aliases": [ 00:22:42.100 "00000000-0000-0000-0000-000000000001" 00:22:42.100 ], 00:22:42.100 "product_name": "passthru", 00:22:42.100 "block_size": 4096, 00:22:42.100 "num_blocks": 8192, 00:22:42.100 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:42.100 "md_size": 32, 00:22:42.100 "md_interleave": false, 00:22:42.100 "dif_type": 0, 00:22:42.100 "assigned_rate_limits": { 00:22:42.100 "rw_ios_per_sec": 0, 00:22:42.100 "rw_mbytes_per_sec": 0, 00:22:42.100 "r_mbytes_per_sec": 0, 00:22:42.100 "w_mbytes_per_sec": 0 00:22:42.100 }, 00:22:42.100 "claimed": true, 00:22:42.100 "claim_type": "exclusive_write", 00:22:42.100 "zoned": false, 00:22:42.100 "supported_io_types": { 00:22:42.100 "read": true, 00:22:42.100 "write": true, 00:22:42.100 "unmap": true, 00:22:42.100 "flush": true, 00:22:42.100 "reset": true, 00:22:42.100 "nvme_admin": false, 00:22:42.100 "nvme_io": false, 00:22:42.100 "nvme_io_md": false, 00:22:42.100 "write_zeroes": true, 00:22:42.100 "zcopy": true, 00:22:42.100 "get_zone_info": false, 00:22:42.100 "zone_management": false, 00:22:42.100 "zone_append": false, 00:22:42.100 "compare": false, 00:22:42.100 "compare_and_write": false, 00:22:42.100 "abort": true, 00:22:42.100 "seek_hole": false, 00:22:42.100 "seek_data": false, 00:22:42.100 "copy": true, 00:22:42.100 "nvme_iov_md": false 00:22:42.100 }, 00:22:42.100 "memory_domains": [ 00:22:42.100 { 00:22:42.100 "dma_device_id": "system", 00:22:42.100 "dma_device_type": 1 00:22:42.100 }, 00:22:42.100 { 00:22:42.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.100 "dma_device_type": 2 00:22:42.100 } 00:22:42.100 ], 00:22:42.100 "driver_specific": { 00:22:42.100 "passthru": { 00:22:42.100 "name": "pt1", 00:22:42.100 "base_bdev_name": "malloc1" 
00:22:42.100 } 00:22:42.100 } 00:22:42.100 }' 00:22:42.100 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:42.100 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:42.100 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:42.100 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:42.100 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:42.100 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:42.100 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:42.359 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:42.359 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:42.359 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:42.359 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:42.359 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:42.359 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:42.359 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:42.359 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:42.617 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:42.617 "name": "pt2", 00:22:42.617 "aliases": [ 00:22:42.617 
"00000000-0000-0000-0000-000000000002" 00:22:42.617 ], 00:22:42.617 "product_name": "passthru", 00:22:42.617 "block_size": 4096, 00:22:42.617 "num_blocks": 8192, 00:22:42.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:42.617 "md_size": 32, 00:22:42.617 "md_interleave": false, 00:22:42.617 "dif_type": 0, 00:22:42.617 "assigned_rate_limits": { 00:22:42.617 "rw_ios_per_sec": 0, 00:22:42.617 "rw_mbytes_per_sec": 0, 00:22:42.617 "r_mbytes_per_sec": 0, 00:22:42.617 "w_mbytes_per_sec": 0 00:22:42.617 }, 00:22:42.617 "claimed": true, 00:22:42.617 "claim_type": "exclusive_write", 00:22:42.617 "zoned": false, 00:22:42.617 "supported_io_types": { 00:22:42.617 "read": true, 00:22:42.617 "write": true, 00:22:42.617 "unmap": true, 00:22:42.617 "flush": true, 00:22:42.617 "reset": true, 00:22:42.617 "nvme_admin": false, 00:22:42.617 "nvme_io": false, 00:22:42.617 "nvme_io_md": false, 00:22:42.617 "write_zeroes": true, 00:22:42.617 "zcopy": true, 00:22:42.617 "get_zone_info": false, 00:22:42.617 "zone_management": false, 00:22:42.617 "zone_append": false, 00:22:42.617 "compare": false, 00:22:42.617 "compare_and_write": false, 00:22:42.617 "abort": true, 00:22:42.617 "seek_hole": false, 00:22:42.617 "seek_data": false, 00:22:42.617 "copy": true, 00:22:42.617 "nvme_iov_md": false 00:22:42.617 }, 00:22:42.617 "memory_domains": [ 00:22:42.617 { 00:22:42.617 "dma_device_id": "system", 00:22:42.617 "dma_device_type": 1 00:22:42.617 }, 00:22:42.617 { 00:22:42.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.617 "dma_device_type": 2 00:22:42.617 } 00:22:42.617 ], 00:22:42.617 "driver_specific": { 00:22:42.617 "passthru": { 00:22:42.617 "name": "pt2", 00:22:42.617 "base_bdev_name": "malloc2" 00:22:42.617 } 00:22:42.617 } 00:22:42.617 }' 00:22:42.617 18:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:42.617 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:42.617 
18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:42.617 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:42.617 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:42.618 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:42.618 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:42.618 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:42.876 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:42.876 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:42.876 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:42.876 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:42.876 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:42.877 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:42.877 [2024-07-24 18:25:51.470894] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 75b9aecf-5da2-45ef-99ab-514e04623ea9 '!=' 75b9aecf-5da2-45ef-99ab-514e04623ea9 ']' 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:43.136 18:25:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:43.136 [2024-07-24 18:25:51.647192] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.136 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.396 18:25:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:43.396 "name": "raid_bdev1", 00:22:43.396 "uuid": "75b9aecf-5da2-45ef-99ab-514e04623ea9", 00:22:43.396 "strip_size_kb": 0, 00:22:43.396 "state": "online", 00:22:43.396 "raid_level": "raid1", 00:22:43.396 "superblock": true, 00:22:43.396 "num_base_bdevs": 2, 00:22:43.396 "num_base_bdevs_discovered": 1, 00:22:43.396 "num_base_bdevs_operational": 1, 00:22:43.396 "base_bdevs_list": [ 00:22:43.396 { 00:22:43.396 "name": null, 00:22:43.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.396 "is_configured": false, 00:22:43.396 "data_offset": 256, 00:22:43.396 "data_size": 7936 00:22:43.396 }, 00:22:43.396 { 00:22:43.396 "name": "pt2", 00:22:43.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:43.396 "is_configured": true, 00:22:43.396 "data_offset": 256, 00:22:43.396 "data_size": 7936 00:22:43.396 } 00:22:43.396 ] 00:22:43.396 }' 00:22:43.396 18:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:43.396 18:25:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:43.964 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:43.964 [2024-07-24 18:25:52.469284] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:43.964 [2024-07-24 18:25:52.469305] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:43.964 [2024-07-24 18:25:52.469344] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:43.964 [2024-07-24 18:25:52.469374] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.964 [2024-07-24 18:25:52.469382] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x11cad10 name raid_bdev1, state offline 00:22:43.964 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.964 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:22:44.223 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:22:44.223 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:22:44.223 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:22:44.223 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:44.223 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:44.482 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:22:44.482 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:44.482 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:22:44.482 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:22:44.482 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:22:44.482 18:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:44.482 [2024-07-24 18:25:52.986658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:44.482 [2024-07-24 
18:25:52.986690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.482 [2024-07-24 18:25:52.986703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11c8da0 00:22:44.482 [2024-07-24 18:25:52.986728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.482 [2024-07-24 18:25:52.987737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.482 [2024-07-24 18:25:52.987756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:44.482 [2024-07-24 18:25:52.987786] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:44.482 [2024-07-24 18:25:52.987802] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:44.482 [2024-07-24 18:25:52.987851] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11cb450 00:22:44.482 [2024-07-24 18:25:52.987858] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:44.482 [2024-07-24 18:25:52.987892] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11cbca0 00:22:44.482 [2024-07-24 18:25:52.987952] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11cb450 00:22:44.482 [2024-07-24 18:25:52.987958] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11cb450 00:22:44.482 [2024-07-24 18:25:52.987999] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.482 pt2 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.482 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.741 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.741 "name": "raid_bdev1", 00:22:44.741 "uuid": "75b9aecf-5da2-45ef-99ab-514e04623ea9", 00:22:44.741 "strip_size_kb": 0, 00:22:44.741 "state": "online", 00:22:44.741 "raid_level": "raid1", 00:22:44.741 "superblock": true, 00:22:44.741 "num_base_bdevs": 2, 00:22:44.741 "num_base_bdevs_discovered": 1, 00:22:44.741 "num_base_bdevs_operational": 1, 00:22:44.741 "base_bdevs_list": [ 00:22:44.741 { 00:22:44.741 "name": null, 00:22:44.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.741 "is_configured": false, 00:22:44.741 "data_offset": 256, 00:22:44.741 "data_size": 7936 00:22:44.741 }, 00:22:44.741 { 00:22:44.741 "name": "pt2", 00:22:44.741 "uuid": "00000000-0000-0000-0000-000000000002", 
00:22:44.741 "is_configured": true, 00:22:44.741 "data_offset": 256, 00:22:44.741 "data_size": 7936 00:22:44.741 } 00:22:44.741 ] 00:22:44.741 }' 00:22:44.741 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.741 18:25:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:45.306 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:45.306 [2024-07-24 18:25:53.812791] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:45.306 [2024-07-24 18:25:53.812811] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.306 [2024-07-24 18:25:53.812851] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.306 [2024-07-24 18:25:53.812884] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.306 [2024-07-24 18:25:53.812892] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11cb450 name raid_bdev1, state offline 00:22:45.306 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.306 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:22:45.565 18:25:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:22:45.565 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:22:45.565 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:22:45.565 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:45.565 [2024-07-24 18:25:54.153659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:45.565 [2024-07-24 18:25:54.153689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.565 [2024-07-24 18:25:54.153700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11c9ea0 00:22:45.565 [2024-07-24 18:25:54.153713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.565 [2024-07-24 18:25:54.154761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.565 [2024-07-24 18:25:54.154782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:45.565 [2024-07-24 18:25:54.154816] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:45.565 [2024-07-24 18:25:54.154832] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:45.565 [2024-07-24 18:25:54.154892] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:45.565 [2024-07-24 18:25:54.154901] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:45.565 [2024-07-24 18:25:54.154911] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11cbf90 name raid_bdev1, state configuring 00:22:45.565 [2024-07-24 18:25:54.154927] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:45.565 [2024-07-24 18:25:54.154961] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x11caf90 00:22:45.565 [2024-07-24 18:25:54.154967] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:45.565 [2024-07-24 18:25:54.155006] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11cbaf0 00:22:45.565 [2024-07-24 18:25:54.155072] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11caf90 00:22:45.565 [2024-07-24 18:25:54.155078] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11caf90 00:22:45.565 [2024-07-24 18:25:54.155126] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.565 pt1 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.825 "name": "raid_bdev1", 00:22:45.825 "uuid": "75b9aecf-5da2-45ef-99ab-514e04623ea9", 00:22:45.825 "strip_size_kb": 0, 00:22:45.825 "state": "online", 00:22:45.825 "raid_level": "raid1", 00:22:45.825 "superblock": true, 00:22:45.825 "num_base_bdevs": 2, 00:22:45.825 "num_base_bdevs_discovered": 1, 00:22:45.825 "num_base_bdevs_operational": 1, 00:22:45.825 "base_bdevs_list": [ 00:22:45.825 { 00:22:45.825 "name": null, 00:22:45.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.825 "is_configured": false, 00:22:45.825 "data_offset": 256, 00:22:45.825 "data_size": 7936 00:22:45.825 }, 00:22:45.825 { 00:22:45.825 "name": "pt2", 00:22:45.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:45.825 "is_configured": true, 00:22:45.825 "data_offset": 256, 00:22:45.825 "data_size": 7936 00:22:45.825 } 00:22:45.825 ] 00:22:45.825 }' 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.825 18:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:46.393 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:46.393 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:46.393 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:22:46.393 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:46.393 18:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:22:46.653 [2024-07-24 18:25:55.116289] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 75b9aecf-5da2-45ef-99ab-514e04623ea9 '!=' 75b9aecf-5da2-45ef-99ab-514e04623ea9 ']' 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 2302014 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 2302014 ']' 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 2302014 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2302014 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2302014' 00:22:46.653 killing process with pid 2302014 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 2302014 00:22:46.653 [2024-07-24 18:25:55.187705] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:46.653 [2024-07-24 18:25:55.187743] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:46.653 [2024-07-24 
18:25:55.187777] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:46.653 [2024-07-24 18:25:55.187785] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11caf90 name raid_bdev1, state offline 00:22:46.653 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 2302014 00:22:46.653 [2024-07-24 18:25:55.207605] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:46.912 18:25:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:22:46.912 00:22:46.912 real 0m11.690s 00:22:46.912 user 0m21.094s 00:22:46.912 sys 0m2.301s 00:22:46.912 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.912 18:25:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:46.912 ************************************ 00:22:46.912 END TEST raid_superblock_test_md_separate 00:22:46.912 ************************************ 00:22:46.912 18:25:55 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:22:46.912 18:25:55 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:22:46.912 18:25:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:46.912 18:25:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:46.912 18:25:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:46.912 ************************************ 00:22:46.912 START TEST raid_rebuild_test_sb_md_separate 00:22:46.912 ************************************ 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:46.912 18:25:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:46.912 18:25:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=2304439 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 2304439 /var/tmp/spdk-raid.sock 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 2304439 ']' 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:46.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.912 18:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:47.171 [2024-07-24 18:25:55.523167] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:22:47.171 [2024-07-24 18:25:55.523212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304439 ] 00:22:47.171 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:47.171 Zero copy mechanism will not be used. 00:22:47.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.171 EAL: Requested device 0000:b3:01.0 cannot be used 00:22:47.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.171 EAL: Requested device 0000:b3:01.1 cannot be used 00:22:47.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.171 EAL: Requested device 0000:b3:01.2 cannot be used 00:22:47.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.171 EAL: Requested device 0000:b3:01.3 cannot be used 00:22:47.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.171 EAL: Requested device 0000:b3:01.4 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b3:01.5 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b3:01.6 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b3:01.7 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: 
Requested device 0000:b3:02.0 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b3:02.1 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b3:02.2 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b3:02.3 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b3:02.4 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b3:02.5 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b3:02.6 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b3:02.7 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:01.0 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:01.1 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:01.2 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:01.3 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:01.4 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:01.5 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 
0000:b5:01.6 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:01.7 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:02.0 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:02.1 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:02.2 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:02.3 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:02.4 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:02.5 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:02.6 cannot be used 00:22:47.172 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:47.172 EAL: Requested device 0000:b5:02.7 cannot be used 00:22:47.172 [2024-07-24 18:25:55.616724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.172 [2024-07-24 18:25:55.690236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.172 [2024-07-24 18:25:55.739747] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.172 [2024-07-24 18:25:55.739778] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.739 18:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.739 18:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:22:47.739 18:25:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:47.739 18:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:22:47.998 BaseBdev1_malloc 00:22:47.998 18:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:48.257 [2024-07-24 18:25:56.632035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:48.257 [2024-07-24 18:25:56.632067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.257 [2024-07-24 18:25:56.632083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2577d00 00:22:48.257 [2024-07-24 18:25:56.632107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.257 [2024-07-24 18:25:56.633138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.257 [2024-07-24 18:25:56.633159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:48.257 BaseBdev1 00:22:48.257 18:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:48.257 18:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:22:48.257 BaseBdev2_malloc 00:22:48.257 18:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p 
BaseBdev2 00:22:48.516 [2024-07-24 18:25:56.961244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:48.516 [2024-07-24 18:25:56.961275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.516 [2024-07-24 18:25:56.961292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f8340 00:22:48.516 [2024-07-24 18:25:56.961300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.516 [2024-07-24 18:25:56.962244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.516 [2024-07-24 18:25:56.962265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:48.516 BaseBdev2 00:22:48.516 18:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:22:48.774 spare_malloc 00:22:48.775 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:48.775 spare_delay 00:22:48.775 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:49.033 [2024-07-24 18:25:57.466755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:49.033 [2024-07-24 18:25:57.466788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.033 [2024-07-24 18:25:57.466805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27033a0 00:22:49.033 [2024-07-24 18:25:57.466814] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.033 [2024-07-24 18:25:57.467784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.033 [2024-07-24 18:25:57.467808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:49.033 spare 00:22:49.033 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:49.292 [2024-07-24 18:25:57.635211] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.292 [2024-07-24 18:25:57.636077] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.292 [2024-07-24 18:25:57.636200] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x256de70 00:22:49.292 [2024-07-24 18:25:57.636209] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:49.292 [2024-07-24 18:25:57.636261] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x256ea30 00:22:49.292 [2024-07-24 18:25:57.636340] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x256de70 00:22:49.292 [2024-07-24 18:25:57.636346] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x256de70 00:22:49.292 [2024-07-24 18:25:57.636393] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.292 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:49.292 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:49.292 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:49.292 18:25:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.292 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.292 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:49.292 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.293 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.293 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.293 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.293 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.293 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.293 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.293 "name": "raid_bdev1", 00:22:49.293 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:49.293 "strip_size_kb": 0, 00:22:49.293 "state": "online", 00:22:49.293 "raid_level": "raid1", 00:22:49.293 "superblock": true, 00:22:49.293 "num_base_bdevs": 2, 00:22:49.293 "num_base_bdevs_discovered": 2, 00:22:49.293 "num_base_bdevs_operational": 2, 00:22:49.293 "base_bdevs_list": [ 00:22:49.293 { 00:22:49.293 "name": "BaseBdev1", 00:22:49.293 "uuid": "4b392fc4-9e7d-5d7c-b1fa-ceef3464c9cb", 00:22:49.293 "is_configured": true, 00:22:49.293 "data_offset": 256, 00:22:49.293 "data_size": 7936 00:22:49.293 }, 00:22:49.293 { 00:22:49.293 "name": "BaseBdev2", 00:22:49.293 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 
00:22:49.293 "is_configured": true, 00:22:49.293 "data_offset": 256, 00:22:49.293 "data_size": 7936 00:22:49.293 } 00:22:49.293 ] 00:22:49.293 }' 00:22:49.293 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.293 18:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:49.860 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:49.860 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:49.860 [2024-07-24 18:25:58.449453] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:50.119 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:50.378 [2024-07-24 18:25:58.794208] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x256ea30 00:22:50.378 /dev/nbd0 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:50.378 1+0 records in 00:22:50.378 1+0 records out 00:22:50.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233296 s, 17.6 MB/s 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:22:50.378 18:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 
oflag=direct 00:22:50.976 7936+0 records in 00:22:50.976 7936+0 records out 00:22:50.976 32505856 bytes (33 MB, 31 MiB) copied, 0.488357 s, 66.6 MB/s 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:50.976 [2024-07-24 18:25:59.529116] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 
00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:50.976 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:51.235 [2024-07-24 18:25:59.681546] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.235 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:22:51.495 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.495 "name": "raid_bdev1", 00:22:51.495 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:51.495 "strip_size_kb": 0, 00:22:51.495 "state": "online", 00:22:51.495 "raid_level": "raid1", 00:22:51.495 "superblock": true, 00:22:51.495 "num_base_bdevs": 2, 00:22:51.495 "num_base_bdevs_discovered": 1, 00:22:51.495 "num_base_bdevs_operational": 1, 00:22:51.495 "base_bdevs_list": [ 00:22:51.495 { 00:22:51.495 "name": null, 00:22:51.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.495 "is_configured": false, 00:22:51.495 "data_offset": 256, 00:22:51.495 "data_size": 7936 00:22:51.495 }, 00:22:51.495 { 00:22:51.495 "name": "BaseBdev2", 00:22:51.495 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:22:51.495 "is_configured": true, 00:22:51.495 "data_offset": 256, 00:22:51.495 "data_size": 7936 00:22:51.495 } 00:22:51.495 ] 00:22:51.495 }' 00:22:51.495 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.495 18:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.061 18:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:52.061 [2024-07-24 18:26:00.503667] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:52.061 [2024-07-24 18:26:00.505689] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25705b0 00:22:52.061 [2024-07-24 18:26:00.507310] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:52.061 18:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:52.997 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.997 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:52.997 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:52.997 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:52.997 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:52.997 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.997 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.256 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:53.256 "name": "raid_bdev1", 00:22:53.256 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:53.256 "strip_size_kb": 0, 00:22:53.256 "state": "online", 00:22:53.256 "raid_level": "raid1", 00:22:53.256 "superblock": true, 00:22:53.256 "num_base_bdevs": 2, 00:22:53.256 "num_base_bdevs_discovered": 2, 00:22:53.256 "num_base_bdevs_operational": 2, 00:22:53.256 "process": { 00:22:53.256 "type": "rebuild", 00:22:53.256 "target": "spare", 00:22:53.256 "progress": { 00:22:53.256 "blocks": 2816, 00:22:53.256 "percent": 35 00:22:53.256 } 00:22:53.256 }, 00:22:53.256 "base_bdevs_list": [ 00:22:53.256 { 00:22:53.256 "name": "spare", 00:22:53.256 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:22:53.256 "is_configured": true, 00:22:53.256 "data_offset": 256, 00:22:53.256 "data_size": 7936 00:22:53.256 }, 00:22:53.256 { 00:22:53.256 "name": "BaseBdev2", 00:22:53.256 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:22:53.256 "is_configured": true, 00:22:53.256 
"data_offset": 256, 00:22:53.256 "data_size": 7936 00:22:53.256 } 00:22:53.256 ] 00:22:53.256 }' 00:22:53.256 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:53.256 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:53.256 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:53.256 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:53.256 18:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:53.517 [2024-07-24 18:26:01.947931] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:53.517 [2024-07-24 18:26:02.017736] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:53.517 [2024-07-24 18:26:02.017766] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.517 [2024-07-24 18:26:02.017776] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:53.517 [2024-07-24 18:26:02.017781] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:53.517 18:26:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.517 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.777 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.777 "name": "raid_bdev1", 00:22:53.777 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:53.777 "strip_size_kb": 0, 00:22:53.777 "state": "online", 00:22:53.777 "raid_level": "raid1", 00:22:53.777 "superblock": true, 00:22:53.777 "num_base_bdevs": 2, 00:22:53.777 "num_base_bdevs_discovered": 1, 00:22:53.777 "num_base_bdevs_operational": 1, 00:22:53.777 "base_bdevs_list": [ 00:22:53.777 { 00:22:53.777 "name": null, 00:22:53.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.777 "is_configured": false, 00:22:53.777 "data_offset": 256, 00:22:53.777 "data_size": 7936 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "name": "BaseBdev2", 00:22:53.777 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:22:53.777 "is_configured": true, 00:22:53.777 "data_offset": 256, 00:22:53.777 "data_size": 7936 00:22:53.777 } 00:22:53.777 ] 
00:22:53.777 }' 00:22:53.777 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.777 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.345 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:54.345 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:54.345 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:54.345 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:54.345 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:54.345 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.345 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.345 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.345 "name": "raid_bdev1", 00:22:54.345 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:54.345 "strip_size_kb": 0, 00:22:54.345 "state": "online", 00:22:54.345 "raid_level": "raid1", 00:22:54.345 "superblock": true, 00:22:54.346 "num_base_bdevs": 2, 00:22:54.346 "num_base_bdevs_discovered": 1, 00:22:54.346 "num_base_bdevs_operational": 1, 00:22:54.346 "base_bdevs_list": [ 00:22:54.346 { 00:22:54.346 "name": null, 00:22:54.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.346 "is_configured": false, 00:22:54.346 "data_offset": 256, 00:22:54.346 "data_size": 7936 00:22:54.346 }, 00:22:54.346 { 00:22:54.346 "name": "BaseBdev2", 00:22:54.346 "uuid": 
"9dd952f7-d534-5c13-a670-93d523408043", 00:22:54.346 "is_configured": true, 00:22:54.346 "data_offset": 256, 00:22:54.346 "data_size": 7936 00:22:54.346 } 00:22:54.346 ] 00:22:54.346 }' 00:22:54.346 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:54.346 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:54.346 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:54.605 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:54.605 18:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:54.605 [2024-07-24 18:26:03.103421] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:54.605 [2024-07-24 18:26:03.105395] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x256ea30 00:22:54.605 [2024-07-24 18:26:03.106447] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:54.605 18:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:55.541 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.541 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:55.541 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:55.541 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:55.541 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local 
raid_bdev_info 00:22:55.541 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.541 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:55.800 "name": "raid_bdev1", 00:22:55.800 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:55.800 "strip_size_kb": 0, 00:22:55.800 "state": "online", 00:22:55.800 "raid_level": "raid1", 00:22:55.800 "superblock": true, 00:22:55.800 "num_base_bdevs": 2, 00:22:55.800 "num_base_bdevs_discovered": 2, 00:22:55.800 "num_base_bdevs_operational": 2, 00:22:55.800 "process": { 00:22:55.800 "type": "rebuild", 00:22:55.800 "target": "spare", 00:22:55.800 "progress": { 00:22:55.800 "blocks": 2816, 00:22:55.800 "percent": 35 00:22:55.800 } 00:22:55.800 }, 00:22:55.800 "base_bdevs_list": [ 00:22:55.800 { 00:22:55.800 "name": "spare", 00:22:55.800 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:22:55.800 "is_configured": true, 00:22:55.800 "data_offset": 256, 00:22:55.800 "data_size": 7936 00:22:55.800 }, 00:22:55.800 { 00:22:55.800 "name": "BaseBdev2", 00:22:55.800 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:22:55.800 "is_configured": true, 00:22:55.800 "data_offset": 256, 00:22:55.800 "data_size": 7936 00:22:55.800 } 00:22:55.800 ] 00:22:55.800 }' 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:22:55.800 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=828 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:55.800 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:55.801 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.801 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.060 18:26:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:56.060 "name": "raid_bdev1", 00:22:56.060 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:56.060 "strip_size_kb": 0, 00:22:56.060 "state": "online", 00:22:56.060 "raid_level": "raid1", 00:22:56.060 "superblock": true, 00:22:56.060 "num_base_bdevs": 2, 00:22:56.060 "num_base_bdevs_discovered": 2, 00:22:56.060 "num_base_bdevs_operational": 2, 00:22:56.060 "process": { 00:22:56.060 "type": "rebuild", 00:22:56.060 "target": "spare", 00:22:56.060 "progress": { 00:22:56.060 "blocks": 3584, 00:22:56.060 "percent": 45 00:22:56.060 } 00:22:56.060 }, 00:22:56.060 "base_bdevs_list": [ 00:22:56.060 { 00:22:56.060 "name": "spare", 00:22:56.060 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:22:56.060 "is_configured": true, 00:22:56.060 "data_offset": 256, 00:22:56.060 "data_size": 7936 00:22:56.060 }, 00:22:56.060 { 00:22:56.060 "name": "BaseBdev2", 00:22:56.060 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:22:56.060 "is_configured": true, 00:22:56.060 "data_offset": 256, 00:22:56.060 "data_size": 7936 00:22:56.060 } 00:22:56.060 ] 00:22:56.060 }' 00:22:56.060 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:56.060 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.060 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:56.060 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.060 18:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:57.438 "name": "raid_bdev1", 00:22:57.438 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:57.438 "strip_size_kb": 0, 00:22:57.438 "state": "online", 00:22:57.438 "raid_level": "raid1", 00:22:57.438 "superblock": true, 00:22:57.438 "num_base_bdevs": 2, 00:22:57.438 "num_base_bdevs_discovered": 2, 00:22:57.438 "num_base_bdevs_operational": 2, 00:22:57.438 "process": { 00:22:57.438 "type": "rebuild", 00:22:57.438 "target": "spare", 00:22:57.438 "progress": { 00:22:57.438 "blocks": 6656, 00:22:57.438 "percent": 83 00:22:57.438 } 00:22:57.438 }, 00:22:57.438 "base_bdevs_list": [ 00:22:57.438 { 00:22:57.438 "name": "spare", 00:22:57.438 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:22:57.438 "is_configured": true, 00:22:57.438 "data_offset": 256, 00:22:57.438 "data_size": 7936 00:22:57.438 }, 00:22:57.438 { 00:22:57.438 "name": "BaseBdev2", 00:22:57.438 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:22:57.438 "is_configured": true, 00:22:57.438 "data_offset": 256, 00:22:57.438 
"data_size": 7936 00:22:57.438 } 00:22:57.438 ] 00:22:57.438 }' 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.438 18:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:57.697 [2024-07-24 18:26:06.227999] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:57.697 [2024-07-24 18:26:06.228041] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:57.697 [2024-07-24 18:26:06.228117] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.634 18:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:58.635 18:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.635 18:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:58.635 18:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:58.635 18:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:58.635 18:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:58.635 18:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:22:58.635 18:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:58.635 "name": "raid_bdev1", 00:22:58.635 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:58.635 "strip_size_kb": 0, 00:22:58.635 "state": "online", 00:22:58.635 "raid_level": "raid1", 00:22:58.635 "superblock": true, 00:22:58.635 "num_base_bdevs": 2, 00:22:58.635 "num_base_bdevs_discovered": 2, 00:22:58.635 "num_base_bdevs_operational": 2, 00:22:58.635 "base_bdevs_list": [ 00:22:58.635 { 00:22:58.635 "name": "spare", 00:22:58.635 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:22:58.635 "is_configured": true, 00:22:58.635 "data_offset": 256, 00:22:58.635 "data_size": 7936 00:22:58.635 }, 00:22:58.635 { 00:22:58.635 "name": "BaseBdev2", 00:22:58.635 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:22:58.635 "is_configured": true, 00:22:58.635 "data_offset": 256, 00:22:58.635 "data_size": 7936 00:22:58.635 } 00:22:58.635 ] 00:22:58.635 }' 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.635 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.894 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:58.894 "name": "raid_bdev1", 00:22:58.894 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:58.894 "strip_size_kb": 0, 00:22:58.894 "state": "online", 00:22:58.894 "raid_level": "raid1", 00:22:58.894 "superblock": true, 00:22:58.894 "num_base_bdevs": 2, 00:22:58.894 "num_base_bdevs_discovered": 2, 00:22:58.894 "num_base_bdevs_operational": 2, 00:22:58.894 "base_bdevs_list": [ 00:22:58.894 { 00:22:58.894 "name": "spare", 00:22:58.894 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:22:58.894 "is_configured": true, 00:22:58.894 "data_offset": 256, 00:22:58.894 "data_size": 7936 00:22:58.894 }, 00:22:58.894 { 00:22:58.894 "name": "BaseBdev2", 00:22:58.894 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:22:58.894 "is_configured": true, 00:22:58.894 "data_offset": 256, 00:22:58.894 "data_size": 7936 00:22:58.894 } 00:22:58.894 ] 00:22:58.895 }' 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:58.895 18:26:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.895 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.154 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:59.154 "name": "raid_bdev1", 00:22:59.154 "uuid": 
"1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:22:59.154 "strip_size_kb": 0, 00:22:59.154 "state": "online", 00:22:59.154 "raid_level": "raid1", 00:22:59.154 "superblock": true, 00:22:59.154 "num_base_bdevs": 2, 00:22:59.154 "num_base_bdevs_discovered": 2, 00:22:59.154 "num_base_bdevs_operational": 2, 00:22:59.154 "base_bdevs_list": [ 00:22:59.154 { 00:22:59.154 "name": "spare", 00:22:59.154 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:22:59.154 "is_configured": true, 00:22:59.154 "data_offset": 256, 00:22:59.154 "data_size": 7936 00:22:59.154 }, 00:22:59.154 { 00:22:59.154 "name": "BaseBdev2", 00:22:59.154 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:22:59.154 "is_configured": true, 00:22:59.154 "data_offset": 256, 00:22:59.154 "data_size": 7936 00:22:59.154 } 00:22:59.154 ] 00:22:59.154 }' 00:22:59.154 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:59.154 18:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:59.722 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:59.723 [2024-07-24 18:26:08.256207] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:59.723 [2024-07-24 18:26:08.256228] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:59.723 [2024-07-24 18:26:08.256273] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.723 [2024-07-24 18:26:08.256313] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.723 [2024-07-24 18:26:08.256320] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x256de70 name raid_bdev1, state offline 00:22:59.723 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.723 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:59.982 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:00.242 /dev/nbd0 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:00.242 1+0 records in 00:23:00.242 1+0 records out 00:23:00.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207654 s, 19.7 MB/s 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:00.242 /dev/nbd1 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:00.242 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:23:00.501 1+0 records in 00:23:00.501 1+0 records out 00:23:00.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290058 s, 14.1 MB/s 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:00.501 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:00.502 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:00.502 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:00.502 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:00.502 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:00.502 18:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:00.502 18:26:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:00.761 18:26:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:23:00.761 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:01.020 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:01.280 [2024-07-24 18:26:09.645634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:01.280 [2024-07-24 18:26:09.645685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.280 [2024-07-24 18:26:09.645697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x256fb80 00:23:01.280 [2024-07-24 18:26:09.645706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.280 [2024-07-24 18:26:09.646767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.280 [2024-07-24 18:26:09.646788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:01.280 [2024-07-24 18:26:09.646831] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:01.280 [2024-07-24 18:26:09.646849] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:01.280 [2024-07-24 18:26:09.646911] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:01.280 spare 00:23:01.280 
18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.280 [2024-07-24 18:26:09.747199] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x256f120 00:23:01.280 [2024-07-24 18:26:09.747213] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:01.280 [2024-07-24 18:26:09.747263] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x256f400 00:23:01.280 [2024-07-24 18:26:09.747351] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x256f120 00:23:01.280 [2024-07-24 18:26:09.747357] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x256f120 00:23:01.280 [2024-07-24 18:26:09.747410] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:01.280 "name": "raid_bdev1", 00:23:01.280 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:01.280 "strip_size_kb": 0, 00:23:01.280 "state": "online", 00:23:01.280 "raid_level": "raid1", 00:23:01.280 "superblock": true, 00:23:01.280 "num_base_bdevs": 2, 00:23:01.280 "num_base_bdevs_discovered": 2, 00:23:01.280 "num_base_bdevs_operational": 2, 00:23:01.280 "base_bdevs_list": [ 00:23:01.280 { 00:23:01.280 "name": "spare", 00:23:01.280 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:23:01.280 "is_configured": true, 00:23:01.280 "data_offset": 256, 00:23:01.280 "data_size": 7936 00:23:01.280 }, 00:23:01.280 { 00:23:01.280 "name": "BaseBdev2", 00:23:01.280 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:01.280 "is_configured": true, 00:23:01.280 "data_offset": 256, 00:23:01.280 "data_size": 7936 00:23:01.280 } 00:23:01.280 ] 00:23:01.280 }' 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:01.280 18:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:01.849 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.849 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:01.849 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:01.849 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@184 -- # local target=none 00:23:01.849 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:01.849 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.849 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.108 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:02.108 "name": "raid_bdev1", 00:23:02.108 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:02.108 "strip_size_kb": 0, 00:23:02.108 "state": "online", 00:23:02.108 "raid_level": "raid1", 00:23:02.108 "superblock": true, 00:23:02.108 "num_base_bdevs": 2, 00:23:02.108 "num_base_bdevs_discovered": 2, 00:23:02.108 "num_base_bdevs_operational": 2, 00:23:02.108 "base_bdevs_list": [ 00:23:02.108 { 00:23:02.108 "name": "spare", 00:23:02.108 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:23:02.108 "is_configured": true, 00:23:02.108 "data_offset": 256, 00:23:02.108 "data_size": 7936 00:23:02.108 }, 00:23:02.108 { 00:23:02.108 "name": "BaseBdev2", 00:23:02.108 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:02.108 "is_configured": true, 00:23:02.108 "data_offset": 256, 00:23:02.108 "data_size": 7936 00:23:02.108 } 00:23:02.108 ] 00:23:02.108 }' 00:23:02.108 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:02.108 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:02.108 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:02.108 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:02.108 
18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.108 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:02.368 [2024-07-24 18:26:10.932995] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 
00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.368 18:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.628 18:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.628 "name": "raid_bdev1", 00:23:02.628 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:02.628 "strip_size_kb": 0, 00:23:02.628 "state": "online", 00:23:02.628 "raid_level": "raid1", 00:23:02.628 "superblock": true, 00:23:02.628 "num_base_bdevs": 2, 00:23:02.628 "num_base_bdevs_discovered": 1, 00:23:02.628 "num_base_bdevs_operational": 1, 00:23:02.628 "base_bdevs_list": [ 00:23:02.628 { 00:23:02.628 "name": null, 00:23:02.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.628 "is_configured": false, 00:23:02.628 "data_offset": 256, 00:23:02.628 "data_size": 7936 00:23:02.628 }, 00:23:02.628 { 00:23:02.628 "name": "BaseBdev2", 00:23:02.628 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:02.628 "is_configured": true, 00:23:02.628 "data_offset": 256, 00:23:02.628 "data_size": 7936 00:23:02.628 } 00:23:02.628 ] 00:23:02.628 }' 00:23:02.628 18:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.628 18:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:03.197 18:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:03.197 [2024-07-24 18:26:11.731061] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:03.197 [2024-07-24 18:26:11.731162] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:03.197 [2024-07-24 18:26:11.731173] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:03.197 [2024-07-24 18:26:11.731192] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:03.197 [2024-07-24 18:26:11.733095] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x256f400 00:23:03.197 [2024-07-24 18:26:11.734109] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:03.197 18:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:04.576 "name": "raid_bdev1", 00:23:04.576 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:04.576 "strip_size_kb": 0, 00:23:04.576 "state": "online", 00:23:04.576 "raid_level": "raid1", 00:23:04.576 
"superblock": true, 00:23:04.576 "num_base_bdevs": 2, 00:23:04.576 "num_base_bdevs_discovered": 2, 00:23:04.576 "num_base_bdevs_operational": 2, 00:23:04.576 "process": { 00:23:04.576 "type": "rebuild", 00:23:04.576 "target": "spare", 00:23:04.576 "progress": { 00:23:04.576 "blocks": 2816, 00:23:04.576 "percent": 35 00:23:04.576 } 00:23:04.576 }, 00:23:04.576 "base_bdevs_list": [ 00:23:04.576 { 00:23:04.576 "name": "spare", 00:23:04.576 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:23:04.576 "is_configured": true, 00:23:04.576 "data_offset": 256, 00:23:04.576 "data_size": 7936 00:23:04.576 }, 00:23:04.576 { 00:23:04.576 "name": "BaseBdev2", 00:23:04.576 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:04.576 "is_configured": true, 00:23:04.576 "data_offset": 256, 00:23:04.576 "data_size": 7936 00:23:04.576 } 00:23:04.576 ] 00:23:04.576 }' 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:04.576 18:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:04.576 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:04.576 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:04.576 [2024-07-24 18:26:13.163554] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:04.835 [2024-07-24 18:26:13.244561] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:04.835 [2024-07-24 18:26:13.244591] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.835 [2024-07-24 18:26:13.244601] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:04.835 [2024-07-24 18:26:13.244606] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:04.835 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:04.835 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:04.835 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:04.835 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:04.835 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:04.835 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:04.835 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.835 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.835 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.835 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.836 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.836 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.094 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:05.094 "name": "raid_bdev1", 00:23:05.094 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 
00:23:05.094 "strip_size_kb": 0, 00:23:05.094 "state": "online", 00:23:05.094 "raid_level": "raid1", 00:23:05.094 "superblock": true, 00:23:05.094 "num_base_bdevs": 2, 00:23:05.094 "num_base_bdevs_discovered": 1, 00:23:05.094 "num_base_bdevs_operational": 1, 00:23:05.094 "base_bdevs_list": [ 00:23:05.094 { 00:23:05.094 "name": null, 00:23:05.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.094 "is_configured": false, 00:23:05.094 "data_offset": 256, 00:23:05.094 "data_size": 7936 00:23:05.094 }, 00:23:05.094 { 00:23:05.094 "name": "BaseBdev2", 00:23:05.094 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:05.094 "is_configured": true, 00:23:05.094 "data_offset": 256, 00:23:05.094 "data_size": 7936 00:23:05.094 } 00:23:05.094 ] 00:23:05.094 }' 00:23:05.094 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:05.094 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:05.353 18:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:05.612 [2024-07-24 18:26:14.074482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:05.612 [2024-07-24 18:26:14.074519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.612 [2024-07-24 18:26:14.074534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x256e370 00:23:05.612 [2024-07-24 18:26:14.074543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.612 [2024-07-24 18:26:14.074715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.612 [2024-07-24 18:26:14.074727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:05.612 [2024-07-24 18:26:14.074768] 
bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:05.612 [2024-07-24 18:26:14.074776] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:05.612 [2024-07-24 18:26:14.074783] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:05.612 [2024-07-24 18:26:14.074796] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:05.612 [2024-07-24 18:26:14.076706] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2572710 00:23:05.612 [2024-07-24 18:26:14.077674] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:05.612 spare 00:23:05.612 18:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:23:06.571 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.571 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:06.571 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:06.571 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:06.571 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:06.571 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.571 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.830 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:23:06.830 "name": "raid_bdev1", 00:23:06.830 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:06.830 "strip_size_kb": 0, 00:23:06.830 "state": "online", 00:23:06.830 "raid_level": "raid1", 00:23:06.830 "superblock": true, 00:23:06.830 "num_base_bdevs": 2, 00:23:06.830 "num_base_bdevs_discovered": 2, 00:23:06.830 "num_base_bdevs_operational": 2, 00:23:06.830 "process": { 00:23:06.830 "type": "rebuild", 00:23:06.830 "target": "spare", 00:23:06.830 "progress": { 00:23:06.830 "blocks": 2816, 00:23:06.830 "percent": 35 00:23:06.830 } 00:23:06.830 }, 00:23:06.830 "base_bdevs_list": [ 00:23:06.830 { 00:23:06.830 "name": "spare", 00:23:06.830 "uuid": "174ade71-0081-5c04-bbad-3f298843b912", 00:23:06.830 "is_configured": true, 00:23:06.830 "data_offset": 256, 00:23:06.830 "data_size": 7936 00:23:06.830 }, 00:23:06.830 { 00:23:06.830 "name": "BaseBdev2", 00:23:06.830 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:06.830 "is_configured": true, 00:23:06.830 "data_offset": 256, 00:23:06.830 "data_size": 7936 00:23:06.830 } 00:23:06.830 ] 00:23:06.830 }' 00:23:06.830 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:06.830 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.830 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:06.830 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.830 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:07.089 [2024-07-24 18:26:15.490653] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:07.089 [2024-07-24 18:26:15.588088] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:23:07.089 [2024-07-24 18:26:15.588119] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.089 [2024-07-24 18:26:15.588129] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:07.089 [2024-07-24 18:26:15.588134] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.089 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.347 
18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.347 "name": "raid_bdev1", 00:23:07.347 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:07.347 "strip_size_kb": 0, 00:23:07.347 "state": "online", 00:23:07.347 "raid_level": "raid1", 00:23:07.347 "superblock": true, 00:23:07.347 "num_base_bdevs": 2, 00:23:07.347 "num_base_bdevs_discovered": 1, 00:23:07.347 "num_base_bdevs_operational": 1, 00:23:07.347 "base_bdevs_list": [ 00:23:07.347 { 00:23:07.347 "name": null, 00:23:07.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.347 "is_configured": false, 00:23:07.347 "data_offset": 256, 00:23:07.347 "data_size": 7936 00:23:07.347 }, 00:23:07.347 { 00:23:07.347 "name": "BaseBdev2", 00:23:07.347 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:07.347 "is_configured": true, 00:23:07.347 "data_offset": 256, 00:23:07.347 "data_size": 7936 00:23:07.347 } 00:23:07.347 ] 00:23:07.347 }' 00:23:07.347 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.347 18:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:07.914 "name": "raid_bdev1", 00:23:07.914 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:07.914 "strip_size_kb": 0, 00:23:07.914 "state": "online", 00:23:07.914 "raid_level": "raid1", 00:23:07.914 "superblock": true, 00:23:07.914 "num_base_bdevs": 2, 00:23:07.914 "num_base_bdevs_discovered": 1, 00:23:07.914 "num_base_bdevs_operational": 1, 00:23:07.914 "base_bdevs_list": [ 00:23:07.914 { 00:23:07.914 "name": null, 00:23:07.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.914 "is_configured": false, 00:23:07.914 "data_offset": 256, 00:23:07.914 "data_size": 7936 00:23:07.914 }, 00:23:07.914 { 00:23:07.914 "name": "BaseBdev2", 00:23:07.914 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:07.914 "is_configured": true, 00:23:07.914 "data_offset": 256, 00:23:07.914 "data_size": 7936 00:23:07.914 } 00:23:07.914 ] 00:23:07.914 }' 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:07.914 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:08.171 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:08.171 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:08.171 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:08.429 [2024-07-24 18:26:16.850156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:08.429 [2024-07-24 18:26:16.850191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.429 [2024-07-24 18:26:16.850209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25776b0 00:23:08.429 [2024-07-24 18:26:16.850217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.429 [2024-07-24 18:26:16.850365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.429 [2024-07-24 18:26:16.850375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:08.429 [2024-07-24 18:26:16.850406] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:08.429 [2024-07-24 18:26:16.850413] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:08.429 [2024-07-24 18:26:16.850420] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:08.429 BaseBdev1 00:23:08.429 18:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.365 18:26:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.365 18:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.623 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.623 "name": "raid_bdev1", 00:23:09.623 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:09.623 "strip_size_kb": 0, 00:23:09.623 "state": "online", 00:23:09.623 "raid_level": "raid1", 00:23:09.623 "superblock": true, 00:23:09.623 "num_base_bdevs": 2, 00:23:09.623 "num_base_bdevs_discovered": 1, 00:23:09.623 "num_base_bdevs_operational": 1, 00:23:09.623 "base_bdevs_list": [ 00:23:09.623 { 00:23:09.623 "name": null, 00:23:09.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.623 "is_configured": false, 00:23:09.623 "data_offset": 256, 00:23:09.623 "data_size": 7936 00:23:09.623 }, 00:23:09.623 { 00:23:09.623 "name": "BaseBdev2", 00:23:09.623 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:09.623 "is_configured": true, 00:23:09.623 "data_offset": 256, 00:23:09.623 "data_size": 7936 00:23:09.623 } 00:23:09.623 ] 
00:23:09.623 }' 00:23:09.623 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.623 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:10.190 "name": "raid_bdev1", 00:23:10.190 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:10.190 "strip_size_kb": 0, 00:23:10.190 "state": "online", 00:23:10.190 "raid_level": "raid1", 00:23:10.190 "superblock": true, 00:23:10.190 "num_base_bdevs": 2, 00:23:10.190 "num_base_bdevs_discovered": 1, 00:23:10.190 "num_base_bdevs_operational": 1, 00:23:10.190 "base_bdevs_list": [ 00:23:10.190 { 00:23:10.190 "name": null, 00:23:10.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.190 "is_configured": false, 00:23:10.190 "data_offset": 256, 00:23:10.190 "data_size": 7936 00:23:10.190 }, 00:23:10.190 { 00:23:10.190 "name": "BaseBdev2", 00:23:10.190 "uuid": 
"9dd952f7-d534-5c13-a670-93d523408043", 00:23:10.190 "is_configured": true, 00:23:10.190 "data_offset": 256, 00:23:10.190 "data_size": 7936 00:23:10.190 } 00:23:10.190 ] 00:23:10.190 }' 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:10.190 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type 
-P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:10.448 [2024-07-24 18:26:18.955608] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:10.448 [2024-07-24 18:26:18.955712] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:10.448 [2024-07-24 18:26:18.955723] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:10.448 request: 00:23:10.448 { 00:23:10.448 "base_bdev": "BaseBdev1", 00:23:10.448 "raid_bdev": "raid_bdev1", 00:23:10.448 "method": "bdev_raid_add_base_bdev", 00:23:10.448 "req_id": 1 00:23:10.448 } 00:23:10.448 Got JSON-RPC error response 00:23:10.448 response: 00:23:10.448 { 00:23:10.448 "code": -22, 00:23:10.448 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:10.448 } 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:10.448 
18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:10.448 18:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.409 18:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.668 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.668 "name": "raid_bdev1", 00:23:11.668 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:11.668 
"strip_size_kb": 0, 00:23:11.668 "state": "online", 00:23:11.668 "raid_level": "raid1", 00:23:11.668 "superblock": true, 00:23:11.668 "num_base_bdevs": 2, 00:23:11.668 "num_base_bdevs_discovered": 1, 00:23:11.668 "num_base_bdevs_operational": 1, 00:23:11.668 "base_bdevs_list": [ 00:23:11.668 { 00:23:11.668 "name": null, 00:23:11.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.668 "is_configured": false, 00:23:11.668 "data_offset": 256, 00:23:11.668 "data_size": 7936 00:23:11.668 }, 00:23:11.668 { 00:23:11.668 "name": "BaseBdev2", 00:23:11.668 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:11.668 "is_configured": true, 00:23:11.668 "data_offset": 256, 00:23:11.668 "data_size": 7936 00:23:11.668 } 00:23:11.668 ] 00:23:11.668 }' 00:23:11.668 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.668 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.235 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:12.235 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:12.235 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:12.235 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:12.235 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:12.235 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.235 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.235 18:26:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:12.235 "name": "raid_bdev1", 00:23:12.235 "uuid": "1b93de3c-489c-4301-a2a1-6a5a2cd538fa", 00:23:12.235 "strip_size_kb": 0, 00:23:12.235 "state": "online", 00:23:12.235 "raid_level": "raid1", 00:23:12.235 "superblock": true, 00:23:12.235 "num_base_bdevs": 2, 00:23:12.235 "num_base_bdevs_discovered": 1, 00:23:12.235 "num_base_bdevs_operational": 1, 00:23:12.235 "base_bdevs_list": [ 00:23:12.235 { 00:23:12.235 "name": null, 00:23:12.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.235 "is_configured": false, 00:23:12.235 "data_offset": 256, 00:23:12.235 "data_size": 7936 00:23:12.235 }, 00:23:12.235 { 00:23:12.235 "name": "BaseBdev2", 00:23:12.235 "uuid": "9dd952f7-d534-5c13-a670-93d523408043", 00:23:12.235 "is_configured": true, 00:23:12.235 "data_offset": 256, 00:23:12.235 "data_size": 7936 00:23:12.235 } 00:23:12.235 ] 00:23:12.235 }' 00:23:12.235 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 2304439 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 2304439 ']' 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 2304439 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2304439 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2304439' 00:23:12.495 killing process with pid 2304439 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 2304439 00:23:12.495 Received shutdown signal, test time was about 60.000000 seconds 00:23:12.495 00:23:12.495 Latency(us) 00:23:12.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.495 =================================================================================================================== 00:23:12.495 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.495 [2024-07-24 18:26:20.938856] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:12.495 [2024-07-24 18:26:20.938920] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:12.495 [2024-07-24 18:26:20.938952] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:12.495 [2024-07-24 18:26:20.938959] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x256f120 name raid_bdev1, state offline 00:23:12.495 18:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 2304439 00:23:12.495 [2024-07-24 18:26:20.966504] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:12.755 18:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 
00:23:12.755 00:23:12.755 real 0m25.674s 00:23:12.755 user 0m38.650s 00:23:12.755 sys 0m4.092s 00:23:12.755 18:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:12.755 18:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:12.755 ************************************ 00:23:12.755 END TEST raid_rebuild_test_sb_md_separate 00:23:12.755 ************************************ 00:23:12.755 18:26:21 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:23:12.755 18:26:21 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:23:12.755 18:26:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:12.755 18:26:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:12.755 18:26:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.755 ************************************ 00:23:12.755 START TEST raid_state_function_test_sb_md_interleaved 00:23:12.755 ************************************ 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:12.755 18:26:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=2309182 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 2309182' 00:23:12.755 Process raid pid: 2309182 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 2309182 /var/tmp/spdk-raid.sock 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 2309182 ']' 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:12.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:12.755 18:26:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:12.755 [2024-07-24 18:26:21.276487] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:23:12.755 [2024-07-24 18:26:21.276530] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:01.0 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:01.1 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:01.2 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:01.3 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:01.4 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:01.5 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:01.6 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:01.7 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:02.0 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:02.1 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:02.2 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:02.3 cannot be used 00:23:12.755 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:02.4 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:02.5 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:02.6 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b3:02.7 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b5:01.0 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b5:01.1 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b5:01.2 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b5:01.3 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b5:01.4 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b5:01.5 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.755 EAL: Requested device 0000:b5:01.6 cannot be used 00:23:12.755 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.756 EAL: Requested device 0000:b5:01.7 cannot be used 00:23:12.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.756 EAL: Requested device 0000:b5:02.0 cannot be used 00:23:12.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.756 EAL: Requested device 0000:b5:02.1 cannot be used 00:23:12.756 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.756 EAL: Requested device 0000:b5:02.2 cannot be used 00:23:12.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.756 EAL: Requested device 0000:b5:02.3 cannot be used 00:23:12.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.756 EAL: Requested device 0000:b5:02.4 cannot be used 00:23:12.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.756 EAL: Requested device 0000:b5:02.5 cannot be used 00:23:12.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.756 EAL: Requested device 0000:b5:02.6 cannot be used 00:23:12.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:12.756 EAL: Requested device 0000:b5:02.7 cannot be used 00:23:13.015 [2024-07-24 18:26:21.369841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.015 [2024-07-24 18:26:21.440697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.015 [2024-07-24 18:26:21.491908] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.015 [2024-07-24 18:26:21.491937] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.584 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:13.584 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:23:13.584 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:13.843 [2024-07-24 18:26:22.243166] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:13.843 [2024-07-24 18:26:22.243197] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:13.843 [2024-07-24 18:26:22.243203] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:13.843 [2024-07-24 18:26:22.243210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.843 "name": "Existed_Raid", 00:23:13.843 "uuid": "0c8224cf-6462-4b93-a662-8adaaad68b22", 00:23:13.843 "strip_size_kb": 0, 00:23:13.843 "state": "configuring", 00:23:13.843 "raid_level": "raid1", 00:23:13.843 "superblock": true, 00:23:13.843 "num_base_bdevs": 2, 00:23:13.843 "num_base_bdevs_discovered": 0, 00:23:13.843 "num_base_bdevs_operational": 2, 00:23:13.843 "base_bdevs_list": [ 00:23:13.843 { 00:23:13.843 "name": "BaseBdev1", 00:23:13.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.843 "is_configured": false, 00:23:13.843 "data_offset": 0, 00:23:13.843 "data_size": 0 00:23:13.843 }, 00:23:13.843 { 00:23:13.843 "name": "BaseBdev2", 00:23:13.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.843 "is_configured": false, 00:23:13.843 "data_offset": 0, 00:23:13.843 "data_size": 0 00:23:13.843 } 00:23:13.843 ] 00:23:13.843 }' 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.843 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.412 18:26:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:14.672 [2024-07-24 18:26:23.077224] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:14.672 [2024-07-24 18:26:23.077242] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d11a0 name Existed_Raid, state configuring 00:23:14.672 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:14.672 [2024-07-24 18:26:23.245678] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:14.672 [2024-07-24 18:26:23.245696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:14.672 [2024-07-24 18:26:23.245703] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:14.672 [2024-07-24 18:26:23.245710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:14.672 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:14.933 [2024-07-24 18:26:23.410603] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:14.933 BaseBdev1 00:23:14.933 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:14.933 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:14.933 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:14.933 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:23:14.933 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:14.933 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:14.933 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:15.193 18:26:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:15.193 [ 00:23:15.193 { 00:23:15.194 "name": "BaseBdev1", 00:23:15.194 "aliases": [ 00:23:15.194 "5506d1ac-5f17-402e-b06c-f40b58cbba90" 00:23:15.194 ], 00:23:15.194 "product_name": "Malloc disk", 00:23:15.194 "block_size": 4128, 00:23:15.194 "num_blocks": 8192, 00:23:15.194 "uuid": "5506d1ac-5f17-402e-b06c-f40b58cbba90", 00:23:15.194 "md_size": 32, 00:23:15.194 "md_interleave": true, 00:23:15.194 "dif_type": 0, 00:23:15.194 "assigned_rate_limits": { 00:23:15.194 "rw_ios_per_sec": 0, 00:23:15.194 "rw_mbytes_per_sec": 0, 00:23:15.194 "r_mbytes_per_sec": 0, 00:23:15.194 "w_mbytes_per_sec": 0 00:23:15.194 }, 00:23:15.194 "claimed": true, 00:23:15.194 "claim_type": "exclusive_write", 00:23:15.194 "zoned": false, 00:23:15.194 "supported_io_types": { 00:23:15.194 "read": true, 00:23:15.194 "write": true, 00:23:15.194 "unmap": true, 00:23:15.194 "flush": true, 00:23:15.194 "reset": true, 00:23:15.194 "nvme_admin": false, 00:23:15.194 "nvme_io": false, 00:23:15.194 "nvme_io_md": false, 00:23:15.194 "write_zeroes": true, 00:23:15.194 "zcopy": true, 00:23:15.194 "get_zone_info": false, 00:23:15.194 "zone_management": false, 00:23:15.194 "zone_append": false, 00:23:15.194 "compare": false, 00:23:15.194 "compare_and_write": false, 00:23:15.194 "abort": true, 00:23:15.194 "seek_hole": false, 00:23:15.194 "seek_data": false, 00:23:15.194 "copy": true, 00:23:15.194 "nvme_iov_md": false 00:23:15.194 }, 00:23:15.194 "memory_domains": [ 00:23:15.194 { 00:23:15.194 "dma_device_id": "system", 00:23:15.194 "dma_device_type": 1 00:23:15.194 }, 00:23:15.194 { 00:23:15.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.194 "dma_device_type": 2 00:23:15.194 } 00:23:15.194 ], 00:23:15.194 "driver_specific": {} 00:23:15.194 } 00:23:15.194 ] 00:23:15.194 18:26:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.194 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.453 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:15.453 "name": "Existed_Raid", 00:23:15.453 "uuid": 
"8e50e4c3-a27b-4760-9655-ca4f984e893a", 00:23:15.453 "strip_size_kb": 0, 00:23:15.453 "state": "configuring", 00:23:15.453 "raid_level": "raid1", 00:23:15.453 "superblock": true, 00:23:15.453 "num_base_bdevs": 2, 00:23:15.453 "num_base_bdevs_discovered": 1, 00:23:15.453 "num_base_bdevs_operational": 2, 00:23:15.453 "base_bdevs_list": [ 00:23:15.453 { 00:23:15.453 "name": "BaseBdev1", 00:23:15.453 "uuid": "5506d1ac-5f17-402e-b06c-f40b58cbba90", 00:23:15.453 "is_configured": true, 00:23:15.453 "data_offset": 256, 00:23:15.453 "data_size": 7936 00:23:15.453 }, 00:23:15.453 { 00:23:15.453 "name": "BaseBdev2", 00:23:15.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.453 "is_configured": false, 00:23:15.453 "data_offset": 0, 00:23:15.453 "data_size": 0 00:23:15.453 } 00:23:15.453 ] 00:23:15.453 }' 00:23:15.453 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:15.453 18:26:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:16.022 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:16.022 [2024-07-24 18:26:24.549549] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:16.022 [2024-07-24 18:26:24.549574] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d0a90 name Existed_Raid, state configuring 00:23:16.022 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:16.281 [2024-07-24 18:26:24.730043] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:16.281 [2024-07-24 18:26:24.731111] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:16.281 [2024-07-24 18:26:24.731136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:16.281 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.541 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:16.541 "name": "Existed_Raid", 00:23:16.541 "uuid": "8f2eba04-9c24-4e4f-a989-4e9f7e844076", 00:23:16.541 "strip_size_kb": 0, 00:23:16.541 "state": "configuring", 00:23:16.541 "raid_level": "raid1", 00:23:16.541 "superblock": true, 00:23:16.541 "num_base_bdevs": 2, 00:23:16.541 "num_base_bdevs_discovered": 1, 00:23:16.541 "num_base_bdevs_operational": 2, 00:23:16.541 "base_bdevs_list": [ 00:23:16.541 { 00:23:16.541 "name": "BaseBdev1", 00:23:16.541 "uuid": "5506d1ac-5f17-402e-b06c-f40b58cbba90", 00:23:16.541 "is_configured": true, 00:23:16.541 "data_offset": 256, 00:23:16.541 "data_size": 7936 00:23:16.541 }, 00:23:16.541 { 00:23:16.541 "name": "BaseBdev2", 00:23:16.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.541 "is_configured": false, 00:23:16.541 "data_offset": 0, 00:23:16.541 "data_size": 0 00:23:16.541 } 00:23:16.541 ] 00:23:16.541 }' 00:23:16.541 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:16.541 18:26:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:17.110 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:17.110 [2024-07-24 18:26:25.563085] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:17.110 [2024-07-24 18:26:25.563177] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2652990 00:23:17.110 [2024-07-24 18:26:25.563185] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:23:17.110 [2024-07-24 18:26:25.563242] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x27641e0 00:23:17.110 [2024-07-24 18:26:25.563296] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2652990 00:23:17.110 [2024-07-24 18:26:25.563302] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2652990 00:23:17.110 [2024-07-24 18:26:25.563341] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.110 BaseBdev2 00:23:17.110 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:17.110 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:17.110 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:17.110 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:23:17.110 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:17.110 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:17.110 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:17.370 [ 00:23:17.370 { 00:23:17.370 "name": "BaseBdev2", 00:23:17.370 "aliases": [ 00:23:17.370 "b46aa7e4-f511-4a11-ae7d-3517dabdea58" 00:23:17.370 ], 00:23:17.370 "product_name": "Malloc 
disk", 00:23:17.370 "block_size": 4128, 00:23:17.370 "num_blocks": 8192, 00:23:17.370 "uuid": "b46aa7e4-f511-4a11-ae7d-3517dabdea58", 00:23:17.370 "md_size": 32, 00:23:17.370 "md_interleave": true, 00:23:17.370 "dif_type": 0, 00:23:17.370 "assigned_rate_limits": { 00:23:17.370 "rw_ios_per_sec": 0, 00:23:17.370 "rw_mbytes_per_sec": 0, 00:23:17.370 "r_mbytes_per_sec": 0, 00:23:17.370 "w_mbytes_per_sec": 0 00:23:17.370 }, 00:23:17.370 "claimed": true, 00:23:17.370 "claim_type": "exclusive_write", 00:23:17.370 "zoned": false, 00:23:17.370 "supported_io_types": { 00:23:17.370 "read": true, 00:23:17.370 "write": true, 00:23:17.370 "unmap": true, 00:23:17.370 "flush": true, 00:23:17.370 "reset": true, 00:23:17.370 "nvme_admin": false, 00:23:17.370 "nvme_io": false, 00:23:17.370 "nvme_io_md": false, 00:23:17.370 "write_zeroes": true, 00:23:17.370 "zcopy": true, 00:23:17.370 "get_zone_info": false, 00:23:17.370 "zone_management": false, 00:23:17.370 "zone_append": false, 00:23:17.370 "compare": false, 00:23:17.370 "compare_and_write": false, 00:23:17.370 "abort": true, 00:23:17.370 "seek_hole": false, 00:23:17.370 "seek_data": false, 00:23:17.370 "copy": true, 00:23:17.370 "nvme_iov_md": false 00:23:17.370 }, 00:23:17.370 "memory_domains": [ 00:23:17.370 { 00:23:17.370 "dma_device_id": "system", 00:23:17.370 "dma_device_type": 1 00:23:17.370 }, 00:23:17.370 { 00:23:17.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.370 "dma_device_type": 2 00:23:17.370 } 00:23:17.370 ], 00:23:17.370 "driver_specific": {} 00:23:17.370 } 00:23:17.370 ] 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.370 18:26:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.630 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.630 "name": "Existed_Raid", 00:23:17.630 "uuid": "8f2eba04-9c24-4e4f-a989-4e9f7e844076", 00:23:17.630 "strip_size_kb": 0, 00:23:17.630 "state": "online", 00:23:17.630 "raid_level": "raid1", 00:23:17.630 "superblock": true, 00:23:17.630 
"num_base_bdevs": 2, 00:23:17.630 "num_base_bdevs_discovered": 2, 00:23:17.630 "num_base_bdevs_operational": 2, 00:23:17.630 "base_bdevs_list": [ 00:23:17.630 { 00:23:17.630 "name": "BaseBdev1", 00:23:17.630 "uuid": "5506d1ac-5f17-402e-b06c-f40b58cbba90", 00:23:17.630 "is_configured": true, 00:23:17.630 "data_offset": 256, 00:23:17.630 "data_size": 7936 00:23:17.630 }, 00:23:17.630 { 00:23:17.630 "name": "BaseBdev2", 00:23:17.630 "uuid": "b46aa7e4-f511-4a11-ae7d-3517dabdea58", 00:23:17.630 "is_configured": true, 00:23:17.630 "data_offset": 256, 00:23:17.630 "data_size": 7936 00:23:17.630 } 00:23:17.630 ] 00:23:17.630 }' 00:23:17.630 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.630 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:18.198 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:18.198 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:18.198 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:18.198 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:18.198 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:18.198 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:18.198 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:18.198 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:18.198 
[2024-07-24 18:26:26.710220] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.198 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:18.198 "name": "Existed_Raid", 00:23:18.198 "aliases": [ 00:23:18.198 "8f2eba04-9c24-4e4f-a989-4e9f7e844076" 00:23:18.198 ], 00:23:18.198 "product_name": "Raid Volume", 00:23:18.198 "block_size": 4128, 00:23:18.198 "num_blocks": 7936, 00:23:18.198 "uuid": "8f2eba04-9c24-4e4f-a989-4e9f7e844076", 00:23:18.199 "md_size": 32, 00:23:18.199 "md_interleave": true, 00:23:18.199 "dif_type": 0, 00:23:18.199 "assigned_rate_limits": { 00:23:18.199 "rw_ios_per_sec": 0, 00:23:18.199 "rw_mbytes_per_sec": 0, 00:23:18.199 "r_mbytes_per_sec": 0, 00:23:18.199 "w_mbytes_per_sec": 0 00:23:18.199 }, 00:23:18.199 "claimed": false, 00:23:18.199 "zoned": false, 00:23:18.199 "supported_io_types": { 00:23:18.199 "read": true, 00:23:18.199 "write": true, 00:23:18.199 "unmap": false, 00:23:18.199 "flush": false, 00:23:18.199 "reset": true, 00:23:18.199 "nvme_admin": false, 00:23:18.199 "nvme_io": false, 00:23:18.199 "nvme_io_md": false, 00:23:18.199 "write_zeroes": true, 00:23:18.199 "zcopy": false, 00:23:18.199 "get_zone_info": false, 00:23:18.199 "zone_management": false, 00:23:18.199 "zone_append": false, 00:23:18.199 "compare": false, 00:23:18.199 "compare_and_write": false, 00:23:18.199 "abort": false, 00:23:18.199 "seek_hole": false, 00:23:18.199 "seek_data": false, 00:23:18.199 "copy": false, 00:23:18.199 "nvme_iov_md": false 00:23:18.199 }, 00:23:18.199 "memory_domains": [ 00:23:18.199 { 00:23:18.199 "dma_device_id": "system", 00:23:18.199 "dma_device_type": 1 00:23:18.199 }, 00:23:18.199 { 00:23:18.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.199 "dma_device_type": 2 00:23:18.199 }, 00:23:18.199 { 00:23:18.199 "dma_device_id": "system", 00:23:18.199 "dma_device_type": 1 00:23:18.199 }, 00:23:18.199 { 00:23:18.199 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:18.199 "dma_device_type": 2 00:23:18.199 } 00:23:18.199 ], 00:23:18.199 "driver_specific": { 00:23:18.199 "raid": { 00:23:18.199 "uuid": "8f2eba04-9c24-4e4f-a989-4e9f7e844076", 00:23:18.199 "strip_size_kb": 0, 00:23:18.199 "state": "online", 00:23:18.199 "raid_level": "raid1", 00:23:18.199 "superblock": true, 00:23:18.199 "num_base_bdevs": 2, 00:23:18.199 "num_base_bdevs_discovered": 2, 00:23:18.199 "num_base_bdevs_operational": 2, 00:23:18.199 "base_bdevs_list": [ 00:23:18.199 { 00:23:18.199 "name": "BaseBdev1", 00:23:18.199 "uuid": "5506d1ac-5f17-402e-b06c-f40b58cbba90", 00:23:18.199 "is_configured": true, 00:23:18.199 "data_offset": 256, 00:23:18.199 "data_size": 7936 00:23:18.199 }, 00:23:18.199 { 00:23:18.199 "name": "BaseBdev2", 00:23:18.199 "uuid": "b46aa7e4-f511-4a11-ae7d-3517dabdea58", 00:23:18.199 "is_configured": true, 00:23:18.199 "data_offset": 256, 00:23:18.199 "data_size": 7936 00:23:18.199 } 00:23:18.199 ] 00:23:18.199 } 00:23:18.199 } 00:23:18.199 }' 00:23:18.199 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:18.199 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:18.199 BaseBdev2' 00:23:18.199 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:18.199 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:18.199 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:18.458 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:18.458 "name": "BaseBdev1", 
00:23:18.458 "aliases": [ 00:23:18.458 "5506d1ac-5f17-402e-b06c-f40b58cbba90" 00:23:18.458 ], 00:23:18.458 "product_name": "Malloc disk", 00:23:18.458 "block_size": 4128, 00:23:18.458 "num_blocks": 8192, 00:23:18.458 "uuid": "5506d1ac-5f17-402e-b06c-f40b58cbba90", 00:23:18.458 "md_size": 32, 00:23:18.458 "md_interleave": true, 00:23:18.458 "dif_type": 0, 00:23:18.458 "assigned_rate_limits": { 00:23:18.458 "rw_ios_per_sec": 0, 00:23:18.458 "rw_mbytes_per_sec": 0, 00:23:18.458 "r_mbytes_per_sec": 0, 00:23:18.458 "w_mbytes_per_sec": 0 00:23:18.458 }, 00:23:18.458 "claimed": true, 00:23:18.458 "claim_type": "exclusive_write", 00:23:18.458 "zoned": false, 00:23:18.458 "supported_io_types": { 00:23:18.458 "read": true, 00:23:18.458 "write": true, 00:23:18.458 "unmap": true, 00:23:18.458 "flush": true, 00:23:18.458 "reset": true, 00:23:18.458 "nvme_admin": false, 00:23:18.458 "nvme_io": false, 00:23:18.458 "nvme_io_md": false, 00:23:18.458 "write_zeroes": true, 00:23:18.458 "zcopy": true, 00:23:18.458 "get_zone_info": false, 00:23:18.458 "zone_management": false, 00:23:18.458 "zone_append": false, 00:23:18.458 "compare": false, 00:23:18.458 "compare_and_write": false, 00:23:18.458 "abort": true, 00:23:18.458 "seek_hole": false, 00:23:18.458 "seek_data": false, 00:23:18.458 "copy": true, 00:23:18.458 "nvme_iov_md": false 00:23:18.458 }, 00:23:18.458 "memory_domains": [ 00:23:18.458 { 00:23:18.458 "dma_device_id": "system", 00:23:18.458 "dma_device_type": 1 00:23:18.458 }, 00:23:18.458 { 00:23:18.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.458 "dma_device_type": 2 00:23:18.458 } 00:23:18.458 ], 00:23:18.458 "driver_specific": {} 00:23:18.458 }' 00:23:18.458 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:18.458 18:26:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:18.458 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:18.458 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:18.718 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:18.977 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:18.977 "name": "BaseBdev2", 00:23:18.977 "aliases": [ 00:23:18.977 "b46aa7e4-f511-4a11-ae7d-3517dabdea58" 00:23:18.977 ], 00:23:18.977 "product_name": "Malloc disk", 00:23:18.977 "block_size": 4128, 00:23:18.977 "num_blocks": 8192, 00:23:18.977 "uuid": 
"b46aa7e4-f511-4a11-ae7d-3517dabdea58", 00:23:18.977 "md_size": 32, 00:23:18.977 "md_interleave": true, 00:23:18.977 "dif_type": 0, 00:23:18.977 "assigned_rate_limits": { 00:23:18.977 "rw_ios_per_sec": 0, 00:23:18.977 "rw_mbytes_per_sec": 0, 00:23:18.977 "r_mbytes_per_sec": 0, 00:23:18.977 "w_mbytes_per_sec": 0 00:23:18.977 }, 00:23:18.977 "claimed": true, 00:23:18.977 "claim_type": "exclusive_write", 00:23:18.977 "zoned": false, 00:23:18.977 "supported_io_types": { 00:23:18.977 "read": true, 00:23:18.977 "write": true, 00:23:18.977 "unmap": true, 00:23:18.977 "flush": true, 00:23:18.977 "reset": true, 00:23:18.977 "nvme_admin": false, 00:23:18.977 "nvme_io": false, 00:23:18.977 "nvme_io_md": false, 00:23:18.977 "write_zeroes": true, 00:23:18.977 "zcopy": true, 00:23:18.977 "get_zone_info": false, 00:23:18.977 "zone_management": false, 00:23:18.977 "zone_append": false, 00:23:18.977 "compare": false, 00:23:18.977 "compare_and_write": false, 00:23:18.977 "abort": true, 00:23:18.977 "seek_hole": false, 00:23:18.977 "seek_data": false, 00:23:18.977 "copy": true, 00:23:18.977 "nvme_iov_md": false 00:23:18.977 }, 00:23:18.977 "memory_domains": [ 00:23:18.977 { 00:23:18.977 "dma_device_id": "system", 00:23:18.977 "dma_device_type": 1 00:23:18.977 }, 00:23:18.977 { 00:23:18.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.977 "dma_device_type": 2 00:23:18.977 } 00:23:18.977 ], 00:23:18.977 "driver_specific": {} 00:23:18.977 }' 00:23:18.977 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:18.977 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:18.977 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:18.977 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:18.977 18:26:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:19.236 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:19.236 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.236 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:19.236 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:19.236 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.236 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:19.236 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:19.236 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:19.496 [2024-07-24 18:26:27.881103] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.496 18:26:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.496 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.496 "name": "Existed_Raid", 00:23:19.496 "uuid": "8f2eba04-9c24-4e4f-a989-4e9f7e844076", 00:23:19.496 "strip_size_kb": 0, 00:23:19.496 "state": "online", 00:23:19.496 "raid_level": "raid1", 00:23:19.496 "superblock": true, 00:23:19.496 "num_base_bdevs": 2, 00:23:19.496 
"num_base_bdevs_discovered": 1, 00:23:19.496 "num_base_bdevs_operational": 1, 00:23:19.496 "base_bdevs_list": [ 00:23:19.496 { 00:23:19.496 "name": null, 00:23:19.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.496 "is_configured": false, 00:23:19.496 "data_offset": 256, 00:23:19.496 "data_size": 7936 00:23:19.496 }, 00:23:19.496 { 00:23:19.496 "name": "BaseBdev2", 00:23:19.496 "uuid": "b46aa7e4-f511-4a11-ae7d-3517dabdea58", 00:23:19.496 "is_configured": true, 00:23:19.496 "data_offset": 256, 00:23:19.496 "data_size": 7936 00:23:19.496 } 00:23:19.496 ] 00:23:19.496 }' 00:23:19.496 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.496 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:20.063 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:20.063 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:20.063 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.063 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:20.323 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:20.323 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:20.323 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:20.323 [2024-07-24 18:26:28.884619] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:20.323 [2024-07-24 18:26:28.884683] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:20.323 [2024-07-24 18:26:28.894890] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.323 [2024-07-24 18:26:28.894930] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.323 [2024-07-24 18:26:28.894937] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2652990 name Existed_Raid, state offline 00:23:20.323 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:20.323 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:20.323 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.323 18:26:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 2309182 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 2309182 ']' 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 2309182 00:23:20.583 18:26:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2309182 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2309182' 00:23:20.583 killing process with pid 2309182 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 2309182 00:23:20.583 [2024-07-24 18:26:29.136751] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:20.583 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 2309182 00:23:20.583 [2024-07-24 18:26:29.137528] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:20.873 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:23:20.873 00:23:20.873 real 0m8.089s 00:23:20.873 user 0m14.192s 00:23:20.873 sys 0m1.641s 00:23:20.873 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:20.873 18:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:20.873 ************************************ 00:23:20.873 END TEST raid_state_function_test_sb_md_interleaved 00:23:20.873 ************************************ 00:23:20.873 18:26:29 bdev_raid -- bdev/bdev_raid.sh@913 -- # 
run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:23:20.873 18:26:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:20.873 18:26:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:20.873 18:26:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:20.873 ************************************ 00:23:20.873 START TEST raid_superblock_test_md_interleaved 00:23:20.873 ************************************ 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:23:20.873 18:26:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=2310751 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 2310751 /var/tmp/spdk-raid.sock 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 2310751 ']' 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.873 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:20.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:20.874 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.874 18:26:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:20.874 [2024-07-24 18:26:29.451202] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:23:20.874 [2024-07-24 18:26:29.451247] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310751 ] 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:01.0 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:01.1 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:01.2 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:01.3 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:01.4 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:01.5 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:01.6 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:01.7 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:02.0 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:02.1 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:02.2 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:02.3 cannot be used 
00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:02.4 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:02.5 cannot be used 00:23:21.133 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.133 EAL: Requested device 0000:b3:02.6 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b3:02.7 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:01.0 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:01.1 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:01.2 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:01.3 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:01.4 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:01.5 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:01.6 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:01.7 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:02.0 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:02.1 cannot be used 00:23:21.134 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:02.2 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:02.3 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:02.4 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:02.5 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:02.6 cannot be used 00:23:21.134 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:21.134 EAL: Requested device 0000:b5:02.7 cannot be used 00:23:21.134 [2024-07-24 18:26:29.543866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.134 [2024-07-24 18:26:29.618658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.134 [2024-07-24 18:26:29.672753] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:21.134 [2024-07-24 18:26:29.672777] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:21.702 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.702 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:23:21.702 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:21.702 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:21.702 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:21.702 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:21.702 
18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:21.702 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:21.702 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:21.702 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:21.702 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:23:21.961 malloc1 00:23:21.961 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:22.220 [2024-07-24 18:26:30.573583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:22.220 [2024-07-24 18:26:30.573628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.220 [2024-07-24 18:26:30.573657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1061c20 00:23:22.220 [2024-07-24 18:26:30.573665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.220 [2024-07-24 18:26:30.574614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.220 [2024-07-24 18:26:30.574642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:22.220 pt1 00:23:22.220 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:22.220 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( 
i <= num_base_bdevs )) 00:23:22.220 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:22.220 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:22.220 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:22.220 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:22.220 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:22.220 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:22.220 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:23:22.220 malloc2 00:23:22.220 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:22.479 [2024-07-24 18:26:30.898284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:22.479 [2024-07-24 18:26:30.898314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.479 [2024-07-24 18:26:30.898326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1046cb0 00:23:22.479 [2024-07-24 18:26:30.898349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.479 [2024-07-24 18:26:30.899245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.479 [2024-07-24 18:26:30.899265] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:22.479 pt2 00:23:22.479 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:22.479 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:22.479 18:26:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:23:22.479 [2024-07-24 18:26:31.066736] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:22.479 [2024-07-24 18:26:31.067605] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:22.479 [2024-07-24 18:26:31.067709] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1048660 00:23:22.479 [2024-07-24 18:26:31.067718] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:22.479 [2024-07-24 18:26:31.067757] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xec4790 00:23:22.479 [2024-07-24 18:26:31.067819] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1048660 00:23:22.479 [2024-07-24 18:26:31.067825] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1048660 00:23:22.479 [2024-07-24 18:26:31.067858] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.738 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:22.738 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:22.738 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.738 18:26:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.738 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.738 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:22.739 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.739 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.739 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.739 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.739 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.739 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.739 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.739 "name": "raid_bdev1", 00:23:22.739 "uuid": "6c22f463-83f6-4a3a-a88e-af68959c7fdb", 00:23:22.739 "strip_size_kb": 0, 00:23:22.739 "state": "online", 00:23:22.739 "raid_level": "raid1", 00:23:22.739 "superblock": true, 00:23:22.739 "num_base_bdevs": 2, 00:23:22.739 "num_base_bdevs_discovered": 2, 00:23:22.739 "num_base_bdevs_operational": 2, 00:23:22.739 "base_bdevs_list": [ 00:23:22.739 { 00:23:22.739 "name": "pt1", 00:23:22.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:22.739 "is_configured": true, 00:23:22.739 "data_offset": 256, 00:23:22.739 "data_size": 7936 00:23:22.739 }, 00:23:22.739 { 00:23:22.739 "name": "pt2", 00:23:22.739 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:23:22.739 "is_configured": true, 00:23:22.739 "data_offset": 256, 00:23:22.739 "data_size": 7936 00:23:22.739 } 00:23:22.739 ] 00:23:22.739 }' 00:23:22.739 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.739 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:23.307 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:23.307 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:23.307 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:23.307 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:23.307 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:23.307 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:23.307 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:23.307 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:23.566 [2024-07-24 18:26:31.921097] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:23.566 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:23.566 "name": "raid_bdev1", 00:23:23.566 "aliases": [ 00:23:23.566 "6c22f463-83f6-4a3a-a88e-af68959c7fdb" 00:23:23.566 ], 00:23:23.566 "product_name": "Raid Volume", 00:23:23.566 "block_size": 4128, 00:23:23.566 "num_blocks": 7936, 00:23:23.566 "uuid": 
"6c22f463-83f6-4a3a-a88e-af68959c7fdb", 00:23:23.566 "md_size": 32, 00:23:23.566 "md_interleave": true, 00:23:23.566 "dif_type": 0, 00:23:23.566 "assigned_rate_limits": { 00:23:23.566 "rw_ios_per_sec": 0, 00:23:23.566 "rw_mbytes_per_sec": 0, 00:23:23.566 "r_mbytes_per_sec": 0, 00:23:23.566 "w_mbytes_per_sec": 0 00:23:23.566 }, 00:23:23.566 "claimed": false, 00:23:23.566 "zoned": false, 00:23:23.566 "supported_io_types": { 00:23:23.566 "read": true, 00:23:23.566 "write": true, 00:23:23.566 "unmap": false, 00:23:23.566 "flush": false, 00:23:23.566 "reset": true, 00:23:23.566 "nvme_admin": false, 00:23:23.566 "nvme_io": false, 00:23:23.566 "nvme_io_md": false, 00:23:23.566 "write_zeroes": true, 00:23:23.566 "zcopy": false, 00:23:23.566 "get_zone_info": false, 00:23:23.566 "zone_management": false, 00:23:23.566 "zone_append": false, 00:23:23.566 "compare": false, 00:23:23.566 "compare_and_write": false, 00:23:23.566 "abort": false, 00:23:23.566 "seek_hole": false, 00:23:23.566 "seek_data": false, 00:23:23.566 "copy": false, 00:23:23.566 "nvme_iov_md": false 00:23:23.566 }, 00:23:23.566 "memory_domains": [ 00:23:23.566 { 00:23:23.566 "dma_device_id": "system", 00:23:23.566 "dma_device_type": 1 00:23:23.566 }, 00:23:23.566 { 00:23:23.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.566 "dma_device_type": 2 00:23:23.566 }, 00:23:23.566 { 00:23:23.566 "dma_device_id": "system", 00:23:23.566 "dma_device_type": 1 00:23:23.566 }, 00:23:23.566 { 00:23:23.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.566 "dma_device_type": 2 00:23:23.566 } 00:23:23.566 ], 00:23:23.566 "driver_specific": { 00:23:23.566 "raid": { 00:23:23.566 "uuid": "6c22f463-83f6-4a3a-a88e-af68959c7fdb", 00:23:23.566 "strip_size_kb": 0, 00:23:23.566 "state": "online", 00:23:23.566 "raid_level": "raid1", 00:23:23.566 "superblock": true, 00:23:23.566 "num_base_bdevs": 2, 00:23:23.566 "num_base_bdevs_discovered": 2, 00:23:23.566 "num_base_bdevs_operational": 2, 00:23:23.566 "base_bdevs_list": [ 
00:23:23.566 { 00:23:23.566 "name": "pt1", 00:23:23.566 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:23.566 "is_configured": true, 00:23:23.566 "data_offset": 256, 00:23:23.566 "data_size": 7936 00:23:23.566 }, 00:23:23.566 { 00:23:23.566 "name": "pt2", 00:23:23.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:23.566 "is_configured": true, 00:23:23.566 "data_offset": 256, 00:23:23.566 "data_size": 7936 00:23:23.566 } 00:23:23.566 ] 00:23:23.566 } 00:23:23.566 } 00:23:23.566 }' 00:23:23.566 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:23.566 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:23.566 pt2' 00:23:23.567 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:23.567 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:23.567 18:26:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:23.567 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:23.567 "name": "pt1", 00:23:23.567 "aliases": [ 00:23:23.567 "00000000-0000-0000-0000-000000000001" 00:23:23.567 ], 00:23:23.567 "product_name": "passthru", 00:23:23.567 "block_size": 4128, 00:23:23.567 "num_blocks": 8192, 00:23:23.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:23.567 "md_size": 32, 00:23:23.567 "md_interleave": true, 00:23:23.567 "dif_type": 0, 00:23:23.567 "assigned_rate_limits": { 00:23:23.567 "rw_ios_per_sec": 0, 00:23:23.567 "rw_mbytes_per_sec": 0, 00:23:23.567 "r_mbytes_per_sec": 0, 00:23:23.567 "w_mbytes_per_sec": 0 00:23:23.567 }, 00:23:23.567 "claimed": true, 
00:23:23.567 "claim_type": "exclusive_write", 00:23:23.567 "zoned": false, 00:23:23.567 "supported_io_types": { 00:23:23.567 "read": true, 00:23:23.567 "write": true, 00:23:23.567 "unmap": true, 00:23:23.567 "flush": true, 00:23:23.567 "reset": true, 00:23:23.567 "nvme_admin": false, 00:23:23.567 "nvme_io": false, 00:23:23.567 "nvme_io_md": false, 00:23:23.567 "write_zeroes": true, 00:23:23.567 "zcopy": true, 00:23:23.567 "get_zone_info": false, 00:23:23.567 "zone_management": false, 00:23:23.567 "zone_append": false, 00:23:23.567 "compare": false, 00:23:23.567 "compare_and_write": false, 00:23:23.567 "abort": true, 00:23:23.567 "seek_hole": false, 00:23:23.567 "seek_data": false, 00:23:23.567 "copy": true, 00:23:23.567 "nvme_iov_md": false 00:23:23.567 }, 00:23:23.567 "memory_domains": [ 00:23:23.567 { 00:23:23.567 "dma_device_id": "system", 00:23:23.567 "dma_device_type": 1 00:23:23.567 }, 00:23:23.567 { 00:23:23.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.567 "dma_device_type": 2 00:23:23.567 } 00:23:23.567 ], 00:23:23.567 "driver_specific": { 00:23:23.567 "passthru": { 00:23:23.567 "name": "pt1", 00:23:23.567 "base_bdev_name": "malloc1" 00:23:23.567 } 00:23:23.567 } 00:23:23.567 }' 00:23:23.567 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:23.826 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:23.826 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:23.826 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:23.826 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:23.826 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:23.826 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:23:23.826 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:23.826 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:23.826 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:23.826 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:24.085 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:24.085 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:24.085 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:24.085 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:24.085 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:24.085 "name": "pt2", 00:23:24.085 "aliases": [ 00:23:24.085 "00000000-0000-0000-0000-000000000002" 00:23:24.085 ], 00:23:24.085 "product_name": "passthru", 00:23:24.085 "block_size": 4128, 00:23:24.085 "num_blocks": 8192, 00:23:24.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:24.085 "md_size": 32, 00:23:24.085 "md_interleave": true, 00:23:24.085 "dif_type": 0, 00:23:24.085 "assigned_rate_limits": { 00:23:24.085 "rw_ios_per_sec": 0, 00:23:24.085 "rw_mbytes_per_sec": 0, 00:23:24.085 "r_mbytes_per_sec": 0, 00:23:24.085 "w_mbytes_per_sec": 0 00:23:24.085 }, 00:23:24.085 "claimed": true, 00:23:24.085 "claim_type": "exclusive_write", 00:23:24.085 "zoned": false, 00:23:24.085 "supported_io_types": { 00:23:24.085 "read": true, 00:23:24.085 "write": true, 00:23:24.085 "unmap": true, 00:23:24.085 "flush": true, 00:23:24.085 "reset": 
true, 00:23:24.085 "nvme_admin": false, 00:23:24.085 "nvme_io": false, 00:23:24.085 "nvme_io_md": false, 00:23:24.085 "write_zeroes": true, 00:23:24.085 "zcopy": true, 00:23:24.085 "get_zone_info": false, 00:23:24.085 "zone_management": false, 00:23:24.085 "zone_append": false, 00:23:24.085 "compare": false, 00:23:24.085 "compare_and_write": false, 00:23:24.085 "abort": true, 00:23:24.085 "seek_hole": false, 00:23:24.085 "seek_data": false, 00:23:24.085 "copy": true, 00:23:24.085 "nvme_iov_md": false 00:23:24.085 }, 00:23:24.085 "memory_domains": [ 00:23:24.085 { 00:23:24.085 "dma_device_id": "system", 00:23:24.085 "dma_device_type": 1 00:23:24.085 }, 00:23:24.085 { 00:23:24.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.085 "dma_device_type": 2 00:23:24.085 } 00:23:24.085 ], 00:23:24.085 "driver_specific": { 00:23:24.085 "passthru": { 00:23:24.085 "name": "pt2", 00:23:24.085 "base_bdev_name": "malloc2" 00:23:24.085 } 00:23:24.085 } 00:23:24.085 }' 00:23:24.085 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:24.085 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 
00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:24.344 18:26:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:24.603 [2024-07-24 18:26:33.072024] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.603 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6c22f463-83f6-4a3a-a88e-af68959c7fdb 00:23:24.603 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 6c22f463-83f6-4a3a-a88e-af68959c7fdb ']' 00:23:24.603 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:24.864 [2024-07-24 18:26:33.244322] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:24.864 [2024-07-24 18:26:33.244335] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:24.864 [2024-07-24 18:26:33.244375] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.864 [2024-07-24 18:26:33.244413] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.864 [2024-07-24 18:26:33.244420] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1048660 name raid_bdev1, state offline 00:23:24.864 18:26:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.864 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:24.864 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:24.864 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:24.864 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:24.864 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:25.124 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:25.124 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:25.383 18:26:33 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:25.383 18:26:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:25.643 [2024-07-24 18:26:34.090489] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc1 is claimed 00:23:25.643 [2024-07-24 18:26:34.091464] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:25.643 [2024-07-24 18:26:34.091508] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:25.643 [2024-07-24 18:26:34.091538] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:25.643 [2024-07-24 18:26:34.091566] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:25.643 [2024-07-24 18:26:34.091572] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10529a0 name raid_bdev1, state configuring 00:23:25.643 request: 00:23:25.643 { 00:23:25.643 "name": "raid_bdev1", 00:23:25.643 "raid_level": "raid1", 00:23:25.643 "base_bdevs": [ 00:23:25.643 "malloc1", 00:23:25.643 "malloc2" 00:23:25.643 ], 00:23:25.643 "superblock": false, 00:23:25.643 "method": "bdev_raid_create", 00:23:25.643 "req_id": 1 00:23:25.643 } 00:23:25.643 Got JSON-RPC error response 00:23:25.643 response: 00:23:25.643 { 00:23:25.643 "code": -17, 00:23:25.643 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:25.643 } 00:23:25.643 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:23:25.643 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.643 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.643 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.643 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.643 18:26:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:25.902 [2024-07-24 18:26:34.427322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:25.902 [2024-07-24 18:26:34.427351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.902 [2024-07-24 18:26:34.427363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1049740 00:23:25.902 [2024-07-24 18:26:34.427371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.902 [2024-07-24 18:26:34.428434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.902 [2024-07-24 18:26:34.428460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:25.902 [2024-07-24 18:26:34.428504] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:25.902 [2024-07-24 18:26:34.428527] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:25.902 pt1 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:25.902 18:26:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.902 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.161 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.161 "name": "raid_bdev1", 00:23:26.161 "uuid": "6c22f463-83f6-4a3a-a88e-af68959c7fdb", 00:23:26.161 "strip_size_kb": 0, 00:23:26.161 "state": "configuring", 00:23:26.161 "raid_level": "raid1", 00:23:26.161 "superblock": true, 00:23:26.161 "num_base_bdevs": 2, 00:23:26.161 "num_base_bdevs_discovered": 1, 00:23:26.161 "num_base_bdevs_operational": 2, 00:23:26.161 "base_bdevs_list": [ 00:23:26.161 { 00:23:26.161 "name": "pt1", 00:23:26.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:26.161 "is_configured": true, 00:23:26.161 "data_offset": 256, 00:23:26.161 "data_size": 7936 00:23:26.161 }, 00:23:26.161 { 00:23:26.161 "name": null, 00:23:26.161 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:23:26.161 "is_configured": false, 00:23:26.161 "data_offset": 256, 00:23:26.161 "data_size": 7936 00:23:26.161 } 00:23:26.161 ] 00:23:26.161 }' 00:23:26.161 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.161 18:26:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:26.730 [2024-07-24 18:26:35.257470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:26.730 [2024-07-24 18:26:35.257511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.730 [2024-07-24 18:26:35.257529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x104b9b0 00:23:26.730 [2024-07-24 18:26:35.257537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.730 [2024-07-24 18:26:35.257696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.730 [2024-07-24 18:26:35.257707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:26.730 [2024-07-24 18:26:35.257739] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:26.730 [2024-07-24 18:26:35.257752] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:23:26.730 [2024-07-24 18:26:35.257812] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xec5350 00:23:26.730 [2024-07-24 18:26:35.257819] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:26.730 [2024-07-24 18:26:35.257860] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1047480 00:23:26.730 [2024-07-24 18:26:35.257911] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xec5350 00:23:26.730 [2024-07-24 18:26:35.257917] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xec5350 00:23:26.730 [2024-07-24 18:26:35.257956] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.730 pt2 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.730 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.989 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.989 "name": "raid_bdev1", 00:23:26.989 "uuid": "6c22f463-83f6-4a3a-a88e-af68959c7fdb", 00:23:26.989 "strip_size_kb": 0, 00:23:26.989 "state": "online", 00:23:26.989 "raid_level": "raid1", 00:23:26.989 "superblock": true, 00:23:26.989 "num_base_bdevs": 2, 00:23:26.989 "num_base_bdevs_discovered": 2, 00:23:26.989 "num_base_bdevs_operational": 2, 00:23:26.989 "base_bdevs_list": [ 00:23:26.989 { 00:23:26.989 "name": "pt1", 00:23:26.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:26.989 "is_configured": true, 00:23:26.989 "data_offset": 256, 00:23:26.989 "data_size": 7936 00:23:26.989 }, 00:23:26.989 { 00:23:26.989 "name": "pt2", 00:23:26.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:26.989 "is_configured": true, 00:23:26.989 "data_offset": 256, 00:23:26.989 "data_size": 7936 00:23:26.989 } 00:23:26.989 ] 00:23:26.989 }' 00:23:26.989 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.989 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.557 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:27.557 18:26:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:27.557 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:27.557 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:27.557 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:27.557 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:27.557 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:27.557 18:26:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:27.557 [2024-07-24 18:26:36.099840] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:27.557 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:27.557 "name": "raid_bdev1", 00:23:27.557 "aliases": [ 00:23:27.557 "6c22f463-83f6-4a3a-a88e-af68959c7fdb" 00:23:27.557 ], 00:23:27.557 "product_name": "Raid Volume", 00:23:27.557 "block_size": 4128, 00:23:27.557 "num_blocks": 7936, 00:23:27.557 "uuid": "6c22f463-83f6-4a3a-a88e-af68959c7fdb", 00:23:27.557 "md_size": 32, 00:23:27.557 "md_interleave": true, 00:23:27.557 "dif_type": 0, 00:23:27.557 "assigned_rate_limits": { 00:23:27.557 "rw_ios_per_sec": 0, 00:23:27.557 "rw_mbytes_per_sec": 0, 00:23:27.557 "r_mbytes_per_sec": 0, 00:23:27.557 "w_mbytes_per_sec": 0 00:23:27.557 }, 00:23:27.557 "claimed": false, 00:23:27.557 "zoned": false, 00:23:27.558 "supported_io_types": { 00:23:27.558 "read": true, 00:23:27.558 "write": true, 00:23:27.558 "unmap": false, 00:23:27.558 "flush": false, 00:23:27.558 "reset": true, 00:23:27.558 "nvme_admin": false, 
00:23:27.558 "nvme_io": false, 00:23:27.558 "nvme_io_md": false, 00:23:27.558 "write_zeroes": true, 00:23:27.558 "zcopy": false, 00:23:27.558 "get_zone_info": false, 00:23:27.558 "zone_management": false, 00:23:27.558 "zone_append": false, 00:23:27.558 "compare": false, 00:23:27.558 "compare_and_write": false, 00:23:27.558 "abort": false, 00:23:27.558 "seek_hole": false, 00:23:27.558 "seek_data": false, 00:23:27.558 "copy": false, 00:23:27.558 "nvme_iov_md": false 00:23:27.558 }, 00:23:27.558 "memory_domains": [ 00:23:27.558 { 00:23:27.558 "dma_device_id": "system", 00:23:27.558 "dma_device_type": 1 00:23:27.558 }, 00:23:27.558 { 00:23:27.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.558 "dma_device_type": 2 00:23:27.558 }, 00:23:27.558 { 00:23:27.558 "dma_device_id": "system", 00:23:27.558 "dma_device_type": 1 00:23:27.558 }, 00:23:27.558 { 00:23:27.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.558 "dma_device_type": 2 00:23:27.558 } 00:23:27.558 ], 00:23:27.558 "driver_specific": { 00:23:27.558 "raid": { 00:23:27.558 "uuid": "6c22f463-83f6-4a3a-a88e-af68959c7fdb", 00:23:27.558 "strip_size_kb": 0, 00:23:27.558 "state": "online", 00:23:27.558 "raid_level": "raid1", 00:23:27.558 "superblock": true, 00:23:27.558 "num_base_bdevs": 2, 00:23:27.558 "num_base_bdevs_discovered": 2, 00:23:27.558 "num_base_bdevs_operational": 2, 00:23:27.558 "base_bdevs_list": [ 00:23:27.558 { 00:23:27.558 "name": "pt1", 00:23:27.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:27.558 "is_configured": true, 00:23:27.558 "data_offset": 256, 00:23:27.558 "data_size": 7936 00:23:27.558 }, 00:23:27.558 { 00:23:27.558 "name": "pt2", 00:23:27.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:27.558 "is_configured": true, 00:23:27.558 "data_offset": 256, 00:23:27.558 "data_size": 7936 00:23:27.558 } 00:23:27.558 ] 00:23:27.558 } 00:23:27.558 } 00:23:27.558 }' 00:23:27.558 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:27.817 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:27.817 pt2' 00:23:27.817 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:27.817 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:27.817 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:27.817 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:27.817 "name": "pt1", 00:23:27.817 "aliases": [ 00:23:27.817 "00000000-0000-0000-0000-000000000001" 00:23:27.817 ], 00:23:27.817 "product_name": "passthru", 00:23:27.817 "block_size": 4128, 00:23:27.817 "num_blocks": 8192, 00:23:27.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:27.817 "md_size": 32, 00:23:27.817 "md_interleave": true, 00:23:27.817 "dif_type": 0, 00:23:27.817 "assigned_rate_limits": { 00:23:27.817 "rw_ios_per_sec": 0, 00:23:27.817 "rw_mbytes_per_sec": 0, 00:23:27.817 "r_mbytes_per_sec": 0, 00:23:27.817 "w_mbytes_per_sec": 0 00:23:27.817 }, 00:23:27.817 "claimed": true, 00:23:27.817 "claim_type": "exclusive_write", 00:23:27.817 "zoned": false, 00:23:27.817 "supported_io_types": { 00:23:27.817 "read": true, 00:23:27.817 "write": true, 00:23:27.817 "unmap": true, 00:23:27.817 "flush": true, 00:23:27.817 "reset": true, 00:23:27.817 "nvme_admin": false, 00:23:27.817 "nvme_io": false, 00:23:27.817 "nvme_io_md": false, 00:23:27.817 "write_zeroes": true, 00:23:27.817 "zcopy": true, 00:23:27.817 "get_zone_info": false, 00:23:27.817 "zone_management": false, 00:23:27.817 "zone_append": false, 00:23:27.817 "compare": false, 00:23:27.817 "compare_and_write": false, 00:23:27.817 
"abort": true, 00:23:27.817 "seek_hole": false, 00:23:27.817 "seek_data": false, 00:23:27.817 "copy": true, 00:23:27.817 "nvme_iov_md": false 00:23:27.817 }, 00:23:27.817 "memory_domains": [ 00:23:27.817 { 00:23:27.817 "dma_device_id": "system", 00:23:27.817 "dma_device_type": 1 00:23:27.817 }, 00:23:27.817 { 00:23:27.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.817 "dma_device_type": 2 00:23:27.817 } 00:23:27.817 ], 00:23:27.817 "driver_specific": { 00:23:27.817 "passthru": { 00:23:27.817 "name": "pt1", 00:23:27.817 "base_bdev_name": "malloc1" 00:23:27.817 } 00:23:27.817 } 00:23:27.817 }' 00:23:27.817 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:27.817 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.076 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:28.076 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.076 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.076 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:28.076 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.076 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.076 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:28.076 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.076 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.336 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:28.336 18:26:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:28.336 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:28.336 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:28.336 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:28.336 "name": "pt2", 00:23:28.336 "aliases": [ 00:23:28.336 "00000000-0000-0000-0000-000000000002" 00:23:28.336 ], 00:23:28.336 "product_name": "passthru", 00:23:28.336 "block_size": 4128, 00:23:28.336 "num_blocks": 8192, 00:23:28.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:28.336 "md_size": 32, 00:23:28.336 "md_interleave": true, 00:23:28.336 "dif_type": 0, 00:23:28.336 "assigned_rate_limits": { 00:23:28.336 "rw_ios_per_sec": 0, 00:23:28.336 "rw_mbytes_per_sec": 0, 00:23:28.336 "r_mbytes_per_sec": 0, 00:23:28.336 "w_mbytes_per_sec": 0 00:23:28.336 }, 00:23:28.336 "claimed": true, 00:23:28.336 "claim_type": "exclusive_write", 00:23:28.336 "zoned": false, 00:23:28.336 "supported_io_types": { 00:23:28.336 "read": true, 00:23:28.336 "write": true, 00:23:28.336 "unmap": true, 00:23:28.336 "flush": true, 00:23:28.336 "reset": true, 00:23:28.336 "nvme_admin": false, 00:23:28.336 "nvme_io": false, 00:23:28.336 "nvme_io_md": false, 00:23:28.336 "write_zeroes": true, 00:23:28.336 "zcopy": true, 00:23:28.336 "get_zone_info": false, 00:23:28.336 "zone_management": false, 00:23:28.336 "zone_append": false, 00:23:28.336 "compare": false, 00:23:28.336 "compare_and_write": false, 00:23:28.336 "abort": true, 00:23:28.336 "seek_hole": false, 00:23:28.336 "seek_data": false, 00:23:28.336 "copy": true, 00:23:28.336 "nvme_iov_md": false 00:23:28.336 }, 00:23:28.336 "memory_domains": [ 00:23:28.336 { 00:23:28.336 "dma_device_id": 
"system", 00:23:28.336 "dma_device_type": 1 00:23:28.336 }, 00:23:28.336 { 00:23:28.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.336 "dma_device_type": 2 00:23:28.336 } 00:23:28.336 ], 00:23:28.336 "driver_specific": { 00:23:28.336 "passthru": { 00:23:28.336 "name": "pt2", 00:23:28.336 "base_bdev_name": "malloc2" 00:23:28.336 } 00:23:28.336 } 00:23:28.336 }' 00:23:28.336 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.336 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.336 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:28.336 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.595 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.595 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:28.595 18:26:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.595 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.595 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:28.595 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.595 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.595 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:28.595 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:28.595 18:26:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:28.854 [2024-07-24 18:26:37.298903] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.854 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 6c22f463-83f6-4a3a-a88e-af68959c7fdb '!=' 6c22f463-83f6-4a3a-a88e-af68959c7fdb ']' 00:23:28.854 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:23:28.854 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:28.854 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:23:28.854 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:29.113 [2024-07-24 18:26:37.471219] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:29.113 18:26:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.113 "name": "raid_bdev1", 00:23:29.113 "uuid": "6c22f463-83f6-4a3a-a88e-af68959c7fdb", 00:23:29.113 "strip_size_kb": 0, 00:23:29.113 "state": "online", 00:23:29.113 "raid_level": "raid1", 00:23:29.113 "superblock": true, 00:23:29.113 "num_base_bdevs": 2, 00:23:29.113 "num_base_bdevs_discovered": 1, 00:23:29.113 "num_base_bdevs_operational": 1, 00:23:29.113 "base_bdevs_list": [ 00:23:29.113 { 00:23:29.113 "name": null, 00:23:29.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.113 "is_configured": false, 00:23:29.113 "data_offset": 256, 00:23:29.113 "data_size": 7936 00:23:29.113 }, 00:23:29.113 { 00:23:29.113 "name": "pt2", 00:23:29.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:29.113 "is_configured": true, 00:23:29.113 "data_offset": 256, 00:23:29.113 "data_size": 7936 00:23:29.113 } 00:23:29.113 ] 00:23:29.113 }' 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.113 18:26:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:29.682 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:29.682 [2024-07-24 18:26:38.253218] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:29.682 [2024-07-24 18:26:38.253236] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:29.682 [2024-07-24 18:26:38.253274] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:29.682 [2024-07-24 18:26:38.253304] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:29.682 [2024-07-24 18:26:38.253311] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xec5350 name raid_bdev1, state offline 00:23:29.682 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.682 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:23:29.942 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:23:29.942 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:23:29.942 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:23:29.942 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:29.942 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < 
num_base_bdevs )) 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:30.202 [2024-07-24 18:26:38.762624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:30.202 [2024-07-24 18:26:38.762661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.202 [2024-07-24 18:26:38.762675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x104a130 00:23:30.202 [2024-07-24 18:26:38.762683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.202 [2024-07-24 18:26:38.763770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.202 [2024-07-24 18:26:38.763794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:30.202 [2024-07-24 18:26:38.763830] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:30.202 [2024-07-24 18:26:38.763849] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:30.202 [2024-07-24 18:26:38.763899] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x104b5e0 00:23:30.202 [2024-07-24 18:26:38.763910] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:30.202 [2024-07-24 18:26:38.763953] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1049300 00:23:30.202 [2024-07-24 18:26:38.764003] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x104b5e0 00:23:30.202 [2024-07-24 18:26:38.764009] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x104b5e0 00:23:30.202 [2024-07-24 18:26:38.764046] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.202 pt2 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.202 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.461 18:26:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:30.461 "name": "raid_bdev1", 00:23:30.461 "uuid": "6c22f463-83f6-4a3a-a88e-af68959c7fdb", 00:23:30.461 "strip_size_kb": 0, 00:23:30.461 "state": "online", 00:23:30.461 "raid_level": "raid1", 00:23:30.461 "superblock": true, 00:23:30.461 "num_base_bdevs": 2, 00:23:30.461 "num_base_bdevs_discovered": 1, 00:23:30.461 "num_base_bdevs_operational": 1, 00:23:30.461 "base_bdevs_list": [ 00:23:30.461 { 00:23:30.461 "name": null, 00:23:30.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.461 "is_configured": false, 00:23:30.461 "data_offset": 256, 00:23:30.461 "data_size": 7936 00:23:30.461 }, 00:23:30.461 { 00:23:30.461 "name": "pt2", 00:23:30.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:30.461 "is_configured": true, 00:23:30.461 "data_offset": 256, 00:23:30.461 "data_size": 7936 00:23:30.461 } 00:23:30.461 ] 00:23:30.461 }' 00:23:30.461 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:30.461 18:26:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:31.030 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:31.030 [2024-07-24 18:26:39.556816] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.030 [2024-07-24 18:26:39.556837] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:31.030 [2024-07-24 18:26:39.556880] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.030 [2024-07-24 18:26:39.556914] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.030 [2024-07-24 18:26:39.556921] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x104b5e0 name raid_bdev1, state offline 00:23:31.030 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.030 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:23:31.290 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:23:31.290 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:23:31.290 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:23:31.290 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:31.550 [2024-07-24 18:26:39.905706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:31.550 [2024-07-24 18:26:39.905740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.550 [2024-07-24 18:26:39.905753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1049d60 00:23:31.550 [2024-07-24 18:26:39.905761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.550 [2024-07-24 18:26:39.906800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.550 [2024-07-24 18:26:39.906824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:31.550 [2024-07-24 18:26:39.906859] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:31.550 [2024-07-24 18:26:39.906877] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:31.550 [2024-07-24 18:26:39.906933] 
bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:31.550 [2024-07-24 18:26:39.906941] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.550 [2024-07-24 18:26:39.906951] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x104bd80 name raid_bdev1, state configuring 00:23:31.550 [2024-07-24 18:26:39.906967] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:31.550 [2024-07-24 18:26:39.907003] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x104bd80 00:23:31.550 [2024-07-24 18:26:39.907009] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:31.550 [2024-07-24 18:26:39.907049] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x104af50 00:23:31.550 [2024-07-24 18:26:39.907099] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x104bd80 00:23:31.550 [2024-07-24 18:26:39.907105] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x104bd80 00:23:31.550 [2024-07-24 18:26:39.907144] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.550 pt1 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:31.550 18:26:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.550 18:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.550 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.550 "name": "raid_bdev1", 00:23:31.550 "uuid": "6c22f463-83f6-4a3a-a88e-af68959c7fdb", 00:23:31.550 "strip_size_kb": 0, 00:23:31.550 "state": "online", 00:23:31.550 "raid_level": "raid1", 00:23:31.550 "superblock": true, 00:23:31.550 "num_base_bdevs": 2, 00:23:31.550 "num_base_bdevs_discovered": 1, 00:23:31.550 "num_base_bdevs_operational": 1, 00:23:31.550 "base_bdevs_list": [ 00:23:31.550 { 00:23:31.550 "name": null, 00:23:31.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.550 "is_configured": false, 00:23:31.550 "data_offset": 256, 00:23:31.550 "data_size": 7936 00:23:31.550 }, 00:23:31.550 { 00:23:31.550 "name": "pt2", 00:23:31.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:31.550 "is_configured": true, 00:23:31.550 "data_offset": 256, 00:23:31.550 "data_size": 7936 00:23:31.550 
} 00:23:31.550 ] 00:23:31.550 }' 00:23:31.550 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.550 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.119 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:32.119 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:32.379 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:23:32.379 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:32.379 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:23:32.379 [2024-07-24 18:26:40.932561] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:32.379 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 6c22f463-83f6-4a3a-a88e-af68959c7fdb '!=' 6c22f463-83f6-4a3a-a88e-af68959c7fdb ']' 00:23:32.379 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 2310751 00:23:32.379 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 2310751 ']' 00:23:32.379 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 2310751 00:23:32.379 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:23:32.379 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.379 
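The `raid_bdev_info` blob above (extracted with `jq -r '.[] | select(.name == "raid_bdev1")'` in `bdev_raid.sh`) is what `verify_raid_bdev_state` checks: after losing a base bdev, the raid1 array must stay online with one operational member. A standalone sketch of that same invariant check in Python, using the JSON literal copied from the trace above (the helper name mirrors the shell function but is an illustrative stand-in, not SPDK code):

```python
import json

# bdev_raid_get_bdevs output for raid_bdev1, copied from the trace above
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "6c22f463-83f6-4a3a-a88e-af68959c7fdb",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 256, "data_size": 7936},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 256, "data_size": 7936}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, operational):
    """Check the same fields the shell verify_raid_bdev_state() compares."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["num_base_bdevs_operational"] == operational
    # discovered bdevs are exactly the configured entries in base_bdevs_list
    configured = [b for b in info["base_bdevs_list"] if b["is_configured"]]
    assert info["num_base_bdevs_discovered"] == len(configured)

verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 1)
print("raid_bdev1 verified: degraded raid1,",
      raid_bdev_info["num_base_bdevs_discovered"], "of",
      raid_bdev_info["num_base_bdevs"], "base bdevs configured")
```

The removed member is represented by a placeholder entry (`"name": null`, all-zero uuid, `is_configured: false`) rather than being dropped from `base_bdevs_list`, which is why `num_base_bdevs` stays 2 while `num_base_bdevs_discovered` drops to 1.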
18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2310751 00:23:32.638 18:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:32.639 18:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:32.639 18:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2310751' 00:23:32.639 killing process with pid 2310751 00:23:32.639 18:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 2310751 00:23:32.639 [2024-07-24 18:26:41.001965] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:32.639 [2024-07-24 18:26:41.002009] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.639 [2024-07-24 18:26:41.002044] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.639 [2024-07-24 18:26:41.002052] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x104bd80 name raid_bdev1, state offline 00:23:32.639 18:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 2310751 00:23:32.639 [2024-07-24 18:26:41.017585] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:32.639 18:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:23:32.639 00:23:32.639 real 0m11.795s 00:23:32.639 user 0m21.184s 00:23:32.639 sys 0m2.398s 00:23:32.639 18:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:32.639 18:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.639 ************************************ 00:23:32.639 END TEST raid_superblock_test_md_interleaved 00:23:32.639 
************************************ 00:23:32.639 18:26:41 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:23:32.639 18:26:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:32.639 18:26:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:32.639 18:26:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:32.899 ************************************ 00:23:32.899 START TEST raid_rebuild_test_sb_md_interleaved 00:23:32.899 ************************************ 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=2313152 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 2313152 /var/tmp/spdk-raid.sock 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 2313152 ']' 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:32.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.899 18:26:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.899 [2024-07-24 18:26:41.331441] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:23:32.899 [2024-07-24 18:26:41.331484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2313152 ] 00:23:32.899 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:32.899 Zero copy mechanism will not be used. 
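The bdevperf invocation above passes `-o 3M` (per-I/O size), and the log immediately notes that 3145728 bytes exceeds the 65536-byte zero copy threshold, so zero copy is disabled for this run. A minimal sketch of that size parsing and threshold comparison (the threshold value is taken from the log line above; the parser is an illustrative stand-in, not SPDK's own argument handling):

```python
ZERO_COPY_THRESHOLD = 65536  # 64 KiB, from the log message above

def parse_size(spec: str) -> int:
    """Parse a bdevperf-style size suffix (K/M/G) into bytes."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if spec[-1].upper() in units:
        return int(spec[:-1]) * units[spec[-1].upper()]
    return int(spec)

io_size = parse_size("3M")           # the -o 3M argument
assert io_size == 3145728            # matches "I/O size of 3145728" in the log
use_zero_copy = io_size <= ZERO_COPY_THRESHOLD
print(f"io_size={io_size}, zero copy used: {use_zero_copy}")
```

With `-o 3M -q 2 -w randrw -M 50`, each outstanding request is a 3 MiB mixed read/write, which is why the copy path is taken instead of zero copy.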
00:23:32.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:32.899 EAL: Requested device 0000:b3:01.0 cannot be used 00:23:32.899 [the same qat_pci_device_allocate()/EAL "cannot be used" pair repeats for each remaining QAT virtual function, 0000:b3:01.1 through 0000:b3:02.7 and 0000:b5:01.0 through 0000:b5:02.7] 00:23:32.900 [2024-07-24 18:26:41.424038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.159 [2024-07-24 18:26:41.499424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.159 [2024-07-24 18:26:41.553458] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:33.159 [2024-07-24 18:26:41.553487] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:33.728 18:26:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.728 18:26:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:23:33.728 18:26:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:33.728 18:26:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:23:33.728 BaseBdev1_malloc 00:23:33.987 18:26:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:33.987 [2024-07-24 18:26:42.469675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:23:33.987 [2024-07-24 18:26:42.469712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.987 [2024-07-24 18:26:42.469730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1017390 00:23:33.987 [2024-07-24 18:26:42.469738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.987 [2024-07-24 18:26:42.470795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.987 [2024-07-24 18:26:42.470818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:33.987 BaseBdev1 00:23:33.987 18:26:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:33.987 18:26:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:23:34.247 BaseBdev2_malloc 00:23:34.247 18:26:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:34.247 [2024-07-24 18:26:42.838789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:34.247 [2024-07-24 18:26:42.838827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.247 [2024-07-24 18:26:42.838843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x100ea00 00:23:34.247 [2024-07-24 18:26:42.838852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.247 [2024-07-24 18:26:42.839812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.247 [2024-07-24 18:26:42.839833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev2 00:23:34.506 BaseBdev2 00:23:34.506 18:26:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:23:34.506 spare_malloc 00:23:34.506 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:34.765 spare_delay 00:23:34.765 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:35.055 [2024-07-24 18:26:43.368079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:35.055 [2024-07-24 18:26:43.368114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.055 [2024-07-24 18:26:43.368128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x100f620 00:23:35.055 [2024-07-24 18:26:43.368154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.055 [2024-07-24 18:26:43.369110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.055 [2024-07-24 18:26:43.369129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:35.055 spare 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:35.055 [2024-07-24 18:26:43.524502] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:35.055 [2024-07-24 18:26:43.525321] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:35.055 [2024-07-24 18:26:43.525439] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1013990 00:23:35.055 [2024-07-24 18:26:43.525448] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:35.055 [2024-07-24 18:26:43.525499] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe79fd0 00:23:35.055 [2024-07-24 18:26:43.525555] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1013990 00:23:35.055 [2024-07-24 18:26:43.525562] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1013990 00:23:35.055 [2024-07-24 18:26:43.525597] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:35.055 
18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.055 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.315 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:35.315 "name": "raid_bdev1", 00:23:35.315 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:35.315 "strip_size_kb": 0, 00:23:35.315 "state": "online", 00:23:35.315 "raid_level": "raid1", 00:23:35.315 "superblock": true, 00:23:35.315 "num_base_bdevs": 2, 00:23:35.315 "num_base_bdevs_discovered": 2, 00:23:35.315 "num_base_bdevs_operational": 2, 00:23:35.315 "base_bdevs_list": [ 00:23:35.315 { 00:23:35.315 "name": "BaseBdev1", 00:23:35.315 "uuid": "79bb4c97-6144-527b-9a9a-8523e7c5eec6", 00:23:35.315 "is_configured": true, 00:23:35.315 "data_offset": 256, 00:23:35.315 "data_size": 7936 00:23:35.315 }, 00:23:35.315 { 00:23:35.315 "name": "BaseBdev2", 00:23:35.315 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:35.315 "is_configured": true, 00:23:35.315 "data_offset": 256, 00:23:35.315 "data_size": 7936 00:23:35.315 } 00:23:35.315 ] 00:23:35.315 }' 00:23:35.315 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:35.315 18:26:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.882 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:35.883 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 
00:23:35.883 [2024-07-24 18:26:44.382870] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:35.883 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:23:35.883 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.883 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:36.142 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:23:36.142 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:36.142 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:23:36.142 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:36.142 [2024-07-24 18:26:44.727579] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:36.414 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.415 "name": "raid_bdev1", 00:23:36.415 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:36.415 "strip_size_kb": 0, 00:23:36.415 "state": "online", 00:23:36.415 "raid_level": "raid1", 00:23:36.415 "superblock": true, 00:23:36.415 "num_base_bdevs": 2, 00:23:36.415 "num_base_bdevs_discovered": 1, 00:23:36.415 "num_base_bdevs_operational": 1, 00:23:36.415 "base_bdevs_list": [ 00:23:36.415 { 00:23:36.415 "name": null, 00:23:36.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.415 "is_configured": false, 00:23:36.415 "data_offset": 256, 00:23:36.415 "data_size": 7936 00:23:36.415 }, 00:23:36.415 { 00:23:36.415 "name": "BaseBdev2", 00:23:36.415 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:36.415 "is_configured": true, 00:23:36.415 "data_offset": 256, 00:23:36.415 "data_size": 7936 00:23:36.415 } 00:23:36.415 ] 00:23:36.415 }' 00:23:36.415 18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.415 
18:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:36.984 18:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:36.984 [2024-07-24 18:26:45.573762] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:36.984 [2024-07-24 18:26:45.576992] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10120b0 00:23:36.984 [2024-07-24 18:26:45.578663] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:37.243 18:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:38.181 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:38.181 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:38.181 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:38.181 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:38.181 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:38.181 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.181 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.440 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:38.440 "name": "raid_bdev1", 00:23:38.440 "uuid": 
"d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:38.440 "strip_size_kb": 0, 00:23:38.440 "state": "online", 00:23:38.440 "raid_level": "raid1", 00:23:38.440 "superblock": true, 00:23:38.440 "num_base_bdevs": 2, 00:23:38.440 "num_base_bdevs_discovered": 2, 00:23:38.440 "num_base_bdevs_operational": 2, 00:23:38.440 "process": { 00:23:38.440 "type": "rebuild", 00:23:38.440 "target": "spare", 00:23:38.440 "progress": { 00:23:38.440 "blocks": 2816, 00:23:38.440 "percent": 35 00:23:38.440 } 00:23:38.440 }, 00:23:38.440 "base_bdevs_list": [ 00:23:38.440 { 00:23:38.440 "name": "spare", 00:23:38.440 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:38.440 "is_configured": true, 00:23:38.440 "data_offset": 256, 00:23:38.440 "data_size": 7936 00:23:38.440 }, 00:23:38.440 { 00:23:38.440 "name": "BaseBdev2", 00:23:38.440 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:38.440 "is_configured": true, 00:23:38.440 "data_offset": 256, 00:23:38.440 "data_size": 7936 00:23:38.440 } 00:23:38.440 ] 00:23:38.440 }' 00:23:38.440 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:38.440 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:38.440 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:38.440 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.440 18:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:38.440 [2024-07-24 18:26:46.998842] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:38.699 [2024-07-24 18:26:47.089083] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: 
No such device 00:23:38.699 [2024-07-24 18:26:47.089112] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.699 [2024-07-24 18:26:47.089121] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:38.699 [2024-07-24 18:26:47.089126] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.699 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:23:38.700 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.700 "name": "raid_bdev1", 00:23:38.700 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:38.700 "strip_size_kb": 0, 00:23:38.700 "state": "online", 00:23:38.700 "raid_level": "raid1", 00:23:38.700 "superblock": true, 00:23:38.700 "num_base_bdevs": 2, 00:23:38.700 "num_base_bdevs_discovered": 1, 00:23:38.700 "num_base_bdevs_operational": 1, 00:23:38.700 "base_bdevs_list": [ 00:23:38.700 { 00:23:38.700 "name": null, 00:23:38.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.700 "is_configured": false, 00:23:38.700 "data_offset": 256, 00:23:38.700 "data_size": 7936 00:23:38.700 }, 00:23:38.700 { 00:23:38.700 "name": "BaseBdev2", 00:23:38.700 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:38.700 "is_configured": true, 00:23:38.700 "data_offset": 256, 00:23:38.700 "data_size": 7936 00:23:38.700 } 00:23:38.700 ] 00:23:38.700 }' 00:23:38.700 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.700 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.268 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:39.268 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:39.268 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:39.268 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:39.268 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:39.268 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.268 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.528 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:39.528 "name": "raid_bdev1", 00:23:39.528 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:39.528 "strip_size_kb": 0, 00:23:39.528 "state": "online", 00:23:39.528 "raid_level": "raid1", 00:23:39.528 "superblock": true, 00:23:39.528 "num_base_bdevs": 2, 00:23:39.528 "num_base_bdevs_discovered": 1, 00:23:39.528 "num_base_bdevs_operational": 1, 00:23:39.528 "base_bdevs_list": [ 00:23:39.528 { 00:23:39.528 "name": null, 00:23:39.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.528 "is_configured": false, 00:23:39.528 "data_offset": 256, 00:23:39.528 "data_size": 7936 00:23:39.528 }, 00:23:39.528 { 00:23:39.528 "name": "BaseBdev2", 00:23:39.528 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:39.528 "is_configured": true, 00:23:39.528 "data_offset": 256, 00:23:39.528 "data_size": 7936 00:23:39.528 } 00:23:39.528 ] 00:23:39.528 }' 00:23:39.528 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:39.528 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:39.528 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:39.528 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:39.528 18:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:39.787 [2024-07-24 18:26:48.155274] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:23:39.787 [2024-07-24 18:26:48.158446] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1008f90 00:23:39.787 [2024-07-24 18:26:48.159460] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:39.787 18:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:40.723 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.723 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:40.723 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:40.723 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:40.723 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:40.723 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.723 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:40.982 "name": "raid_bdev1", 00:23:40.982 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:40.982 "strip_size_kb": 0, 00:23:40.982 "state": "online", 00:23:40.982 "raid_level": "raid1", 00:23:40.982 "superblock": true, 00:23:40.982 "num_base_bdevs": 2, 00:23:40.982 "num_base_bdevs_discovered": 2, 00:23:40.982 "num_base_bdevs_operational": 2, 00:23:40.982 "process": { 00:23:40.982 "type": "rebuild", 00:23:40.982 "target": "spare", 00:23:40.982 "progress": { 00:23:40.982 
"blocks": 2816, 00:23:40.982 "percent": 35 00:23:40.982 } 00:23:40.982 }, 00:23:40.982 "base_bdevs_list": [ 00:23:40.982 { 00:23:40.982 "name": "spare", 00:23:40.982 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:40.982 "is_configured": true, 00:23:40.982 "data_offset": 256, 00:23:40.982 "data_size": 7936 00:23:40.982 }, 00:23:40.982 { 00:23:40.982 "name": "BaseBdev2", 00:23:40.982 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:40.982 "is_configured": true, 00:23:40.982 "data_offset": 256, 00:23:40.982 "data_size": 7936 00:23:40.982 } 00:23:40.982 ] 00:23:40.982 }' 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:23:40.982 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=873 00:23:40.982 18:26:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.982 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.241 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:41.241 "name": "raid_bdev1", 00:23:41.241 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:41.241 "strip_size_kb": 0, 00:23:41.241 "state": "online", 00:23:41.241 "raid_level": "raid1", 00:23:41.241 "superblock": true, 00:23:41.241 "num_base_bdevs": 2, 00:23:41.241 "num_base_bdevs_discovered": 2, 00:23:41.241 "num_base_bdevs_operational": 2, 00:23:41.241 "process": { 00:23:41.241 "type": "rebuild", 00:23:41.241 "target": "spare", 00:23:41.241 "progress": { 00:23:41.241 "blocks": 3584, 00:23:41.241 "percent": 45 00:23:41.241 } 00:23:41.241 }, 00:23:41.241 "base_bdevs_list": [ 00:23:41.241 { 00:23:41.241 "name": "spare", 00:23:41.241 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:41.241 "is_configured": true, 00:23:41.241 "data_offset": 256, 00:23:41.241 
"data_size": 7936 00:23:41.241 }, 00:23:41.241 { 00:23:41.241 "name": "BaseBdev2", 00:23:41.241 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:41.241 "is_configured": true, 00:23:41.241 "data_offset": 256, 00:23:41.241 "data_size": 7936 00:23:41.241 } 00:23:41.241 ] 00:23:41.241 }' 00:23:41.241 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:41.241 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.241 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:41.241 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.241 18:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:42.179 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:42.179 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.179 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:42.179 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:42.179 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:42.179 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:42.179 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.179 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:42.478 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:42.478 "name": "raid_bdev1", 00:23:42.478 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:42.478 "strip_size_kb": 0, 00:23:42.478 "state": "online", 00:23:42.478 "raid_level": "raid1", 00:23:42.478 "superblock": true, 00:23:42.478 "num_base_bdevs": 2, 00:23:42.478 "num_base_bdevs_discovered": 2, 00:23:42.478 "num_base_bdevs_operational": 2, 00:23:42.478 "process": { 00:23:42.478 "type": "rebuild", 00:23:42.478 "target": "spare", 00:23:42.478 "progress": { 00:23:42.478 "blocks": 6656, 00:23:42.478 "percent": 83 00:23:42.478 } 00:23:42.478 }, 00:23:42.478 "base_bdevs_list": [ 00:23:42.478 { 00:23:42.478 "name": "spare", 00:23:42.478 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:42.478 "is_configured": true, 00:23:42.478 "data_offset": 256, 00:23:42.478 "data_size": 7936 00:23:42.478 }, 00:23:42.478 { 00:23:42.478 "name": "BaseBdev2", 00:23:42.478 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:42.478 "is_configured": true, 00:23:42.478 "data_offset": 256, 00:23:42.478 "data_size": 7936 00:23:42.478 } 00:23:42.478 ] 00:23:42.478 }' 00:23:42.478 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:42.478 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.478 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:42.478 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.478 18:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:42.737 [2024-07-24 18:26:51.280881] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:42.737 [2024-07-24 
18:26:51.280919] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:42.737 [2024-07-24 18:26:51.280993] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.672 18:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:43.672 18:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.672 18:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:43.672 18:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:43.672 18:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:43.672 18:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:43.672 18:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.672 18:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.672 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:43.672 "name": "raid_bdev1", 00:23:43.672 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:43.672 "strip_size_kb": 0, 00:23:43.672 "state": "online", 00:23:43.672 "raid_level": "raid1", 00:23:43.672 "superblock": true, 00:23:43.672 "num_base_bdevs": 2, 00:23:43.672 "num_base_bdevs_discovered": 2, 00:23:43.672 "num_base_bdevs_operational": 2, 00:23:43.672 "base_bdevs_list": [ 00:23:43.672 { 00:23:43.672 "name": "spare", 00:23:43.672 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:43.672 "is_configured": true, 
00:23:43.672 "data_offset": 256, 00:23:43.672 "data_size": 7936 00:23:43.672 }, 00:23:43.672 { 00:23:43.672 "name": "BaseBdev2", 00:23:43.672 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:43.672 "is_configured": true, 00:23:43.673 "data_offset": 256, 00:23:43.673 "data_size": 7936 00:23:43.673 } 00:23:43.673 ] 00:23:43.673 }' 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.673 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.931 18:26:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:43.931 "name": "raid_bdev1", 00:23:43.931 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:43.931 "strip_size_kb": 0, 00:23:43.931 "state": "online", 00:23:43.931 "raid_level": "raid1", 00:23:43.931 "superblock": true, 00:23:43.931 "num_base_bdevs": 2, 00:23:43.931 "num_base_bdevs_discovered": 2, 00:23:43.931 "num_base_bdevs_operational": 2, 00:23:43.931 "base_bdevs_list": [ 00:23:43.931 { 00:23:43.932 "name": "spare", 00:23:43.932 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:43.932 "is_configured": true, 00:23:43.932 "data_offset": 256, 00:23:43.932 "data_size": 7936 00:23:43.932 }, 00:23:43.932 { 00:23:43.932 "name": "BaseBdev2", 00:23:43.932 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:43.932 "is_configured": true, 00:23:43.932 "data_offset": 256, 00:23:43.932 "data_size": 7936 00:23:43.932 } 00:23:43.932 ] 00:23:43.932 }' 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.932 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.191 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.191 "name": "raid_bdev1", 00:23:44.191 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:44.191 "strip_size_kb": 0, 00:23:44.191 "state": "online", 00:23:44.191 "raid_level": "raid1", 00:23:44.191 "superblock": true, 00:23:44.191 "num_base_bdevs": 2, 00:23:44.191 "num_base_bdevs_discovered": 2, 00:23:44.191 "num_base_bdevs_operational": 2, 00:23:44.191 "base_bdevs_list": [ 00:23:44.191 { 00:23:44.191 "name": "spare", 00:23:44.191 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:44.191 "is_configured": true, 00:23:44.191 "data_offset": 256, 00:23:44.191 "data_size": 7936 00:23:44.191 }, 00:23:44.191 { 00:23:44.191 "name": "BaseBdev2", 00:23:44.191 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:44.191 "is_configured": true, 00:23:44.191 "data_offset": 256, 00:23:44.191 
"data_size": 7936 00:23:44.191 } 00:23:44.191 ] 00:23:44.191 }' 00:23:44.191 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.191 18:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.450 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:44.709 [2024-07-24 18:26:53.189958] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:44.709 [2024-07-24 18:26:53.189978] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:44.709 [2024-07-24 18:26:53.190028] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:44.709 [2024-07-24 18:26:53.190068] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:44.709 [2024-07-24 18:26:53.190076] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1013990 name raid_bdev1, state offline 00:23:44.709 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.709 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:23:44.976 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:44.977 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:23:44.977 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:23:44.977 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:44.977 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:45.235 [2024-07-24 18:26:53.683217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:45.236 [2024-07-24 18:26:53.683249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.236 [2024-07-24 18:26:53.683266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe7a470 00:23:45.236 [2024-07-24 18:26:53.683274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.236 [2024-07-24 18:26:53.684506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.236 [2024-07-24 18:26:53.684531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:45.236 [2024-07-24 18:26:53.684574] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:45.236 [2024-07-24 18:26:53.684594] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:45.236 [2024-07-24 18:26:53.684664] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.236 spare 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.236 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.236 [2024-07-24 18:26:53.784952] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xe79e50 00:23:45.236 [2024-07-24 18:26:53.784964] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:45.236 [2024-07-24 18:26:53.785029] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1008ce0 00:23:45.236 [2024-07-24 18:26:53.785096] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe79e50 00:23:45.236 [2024-07-24 18:26:53.785102] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe79e50 00:23:45.236 [2024-07-24 18:26:53.785148] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.495 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.495 "name": "raid_bdev1", 00:23:45.495 
"uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:45.495 "strip_size_kb": 0, 00:23:45.495 "state": "online", 00:23:45.495 "raid_level": "raid1", 00:23:45.495 "superblock": true, 00:23:45.495 "num_base_bdevs": 2, 00:23:45.495 "num_base_bdevs_discovered": 2, 00:23:45.496 "num_base_bdevs_operational": 2, 00:23:45.496 "base_bdevs_list": [ 00:23:45.496 { 00:23:45.496 "name": "spare", 00:23:45.496 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:45.496 "is_configured": true, 00:23:45.496 "data_offset": 256, 00:23:45.496 "data_size": 7936 00:23:45.496 }, 00:23:45.496 { 00:23:45.496 "name": "BaseBdev2", 00:23:45.496 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:45.496 "is_configured": true, 00:23:45.496 "data_offset": 256, 00:23:45.496 "data_size": 7936 00:23:45.496 } 00:23:45.496 ] 00:23:45.496 }' 00:23:45.496 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.496 18:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.755 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:45.755 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:45.755 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:45.755 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:45.755 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:45.755 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.755 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:46.014 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:46.014 "name": "raid_bdev1", 00:23:46.014 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:46.014 "strip_size_kb": 0, 00:23:46.014 "state": "online", 00:23:46.014 "raid_level": "raid1", 00:23:46.014 "superblock": true, 00:23:46.014 "num_base_bdevs": 2, 00:23:46.014 "num_base_bdevs_discovered": 2, 00:23:46.014 "num_base_bdevs_operational": 2, 00:23:46.014 "base_bdevs_list": [ 00:23:46.014 { 00:23:46.014 "name": "spare", 00:23:46.014 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:46.014 "is_configured": true, 00:23:46.014 "data_offset": 256, 00:23:46.014 "data_size": 7936 00:23:46.014 }, 00:23:46.014 { 00:23:46.014 "name": "BaseBdev2", 00:23:46.014 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:46.014 "is_configured": true, 00:23:46.014 "data_offset": 256, 00:23:46.014 "data_size": 7936 00:23:46.014 } 00:23:46.014 ] 00:23:46.014 }' 00:23:46.014 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:46.014 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:46.014 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:46.014 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:46.014 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.014 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:46.273 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 
00:23:46.273 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:46.532 [2024-07-24 18:26:54.922478] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.532 18:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.532 18:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:46.532 "name": "raid_bdev1", 00:23:46.532 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:46.532 "strip_size_kb": 0, 00:23:46.532 "state": "online", 00:23:46.532 "raid_level": "raid1", 00:23:46.532 "superblock": true, 00:23:46.532 "num_base_bdevs": 2, 00:23:46.532 "num_base_bdevs_discovered": 1, 00:23:46.532 "num_base_bdevs_operational": 1, 00:23:46.532 "base_bdevs_list": [ 00:23:46.532 { 00:23:46.532 "name": null, 00:23:46.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.532 "is_configured": false, 00:23:46.532 "data_offset": 256, 00:23:46.532 "data_size": 7936 00:23:46.532 }, 00:23:46.532 { 00:23:46.532 "name": "BaseBdev2", 00:23:46.532 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:46.532 "is_configured": true, 00:23:46.532 "data_offset": 256, 00:23:46.532 "data_size": 7936 00:23:46.532 } 00:23:46.532 ] 00:23:46.532 }' 00:23:46.532 18:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:46.532 18:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.200 18:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:47.200 [2024-07-24 18:26:55.768682] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:47.200 [2024-07-24 18:26:55.768793] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:47.200 [2024-07-24 18:26:55.768804] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:47.200 [2024-07-24 18:26:55.768825] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:47.200 [2024-07-24 18:26:55.771920] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1012df0 00:23:47.200 [2024-07-24 18:26:55.773527] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:47.200 18:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:23:48.577 18:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.577 18:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:48.577 18:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:48.577 18:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:48.577 18:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:48.577 18:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.577 18:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.577 18:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:48.577 "name": "raid_bdev1", 00:23:48.577 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:48.577 "strip_size_kb": 0, 00:23:48.577 "state": "online", 00:23:48.577 "raid_level": "raid1", 00:23:48.577 "superblock": true, 00:23:48.577 "num_base_bdevs": 2, 00:23:48.577 "num_base_bdevs_discovered": 2, 00:23:48.577 "num_base_bdevs_operational": 2, 00:23:48.577 "process": { 00:23:48.577 "type": 
"rebuild", 00:23:48.577 "target": "spare", 00:23:48.577 "progress": { 00:23:48.577 "blocks": 2816, 00:23:48.577 "percent": 35 00:23:48.577 } 00:23:48.577 }, 00:23:48.577 "base_bdevs_list": [ 00:23:48.577 { 00:23:48.577 "name": "spare", 00:23:48.577 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:48.577 "is_configured": true, 00:23:48.577 "data_offset": 256, 00:23:48.577 "data_size": 7936 00:23:48.577 }, 00:23:48.577 { 00:23:48.577 "name": "BaseBdev2", 00:23:48.577 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:48.577 "is_configured": true, 00:23:48.577 "data_offset": 256, 00:23:48.577 "data_size": 7936 00:23:48.577 } 00:23:48.577 ] 00:23:48.577 }' 00:23:48.577 18:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:48.577 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.577 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:48.577 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.577 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:48.837 [2024-07-24 18:26:57.197682] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:48.837 [2024-07-24 18:26:57.283916] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:48.837 [2024-07-24 18:26:57.283948] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.837 [2024-07-24 18:26:57.283958] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:48.837 [2024-07-24 18:26:57.283980] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.837 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.096 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:49.096 "name": "raid_bdev1", 00:23:49.096 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:49.096 "strip_size_kb": 0, 00:23:49.096 "state": "online", 00:23:49.096 "raid_level": "raid1", 00:23:49.096 "superblock": true, 00:23:49.096 
"num_base_bdevs": 2, 00:23:49.096 "num_base_bdevs_discovered": 1, 00:23:49.096 "num_base_bdevs_operational": 1, 00:23:49.096 "base_bdevs_list": [ 00:23:49.096 { 00:23:49.096 "name": null, 00:23:49.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.096 "is_configured": false, 00:23:49.096 "data_offset": 256, 00:23:49.096 "data_size": 7936 00:23:49.096 }, 00:23:49.096 { 00:23:49.096 "name": "BaseBdev2", 00:23:49.096 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:49.096 "is_configured": true, 00:23:49.096 "data_offset": 256, 00:23:49.096 "data_size": 7936 00:23:49.096 } 00:23:49.096 ] 00:23:49.096 }' 00:23:49.096 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:49.096 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.664 18:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:49.664 [2024-07-24 18:26:58.117648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:49.664 [2024-07-24 18:26:58.117688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.664 [2024-07-24 18:26:58.117719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe7a720 00:23:49.664 [2024-07-24 18:26:58.117728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.664 [2024-07-24 18:26:58.117871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.664 [2024-07-24 18:26:58.117881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:49.664 [2024-07-24 18:26:58.117920] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:49.664 [2024-07-24 18:26:58.117928] 
bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:49.664 [2024-07-24 18:26:58.117935] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:49.664 [2024-07-24 18:26:58.117948] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:49.664 [2024-07-24 18:26:58.121029] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1008ce0 00:23:49.664 [2024-07-24 18:26:58.122080] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:49.664 spare 00:23:49.664 18:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:23:50.601 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:50.601 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:50.601 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:50.601 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:50.601 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:50.601 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.601 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.860 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:50.860 "name": "raid_bdev1", 00:23:50.860 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 
00:23:50.860 "strip_size_kb": 0, 00:23:50.860 "state": "online", 00:23:50.860 "raid_level": "raid1", 00:23:50.860 "superblock": true, 00:23:50.860 "num_base_bdevs": 2, 00:23:50.860 "num_base_bdevs_discovered": 2, 00:23:50.860 "num_base_bdevs_operational": 2, 00:23:50.860 "process": { 00:23:50.860 "type": "rebuild", 00:23:50.860 "target": "spare", 00:23:50.860 "progress": { 00:23:50.860 "blocks": 2816, 00:23:50.860 "percent": 35 00:23:50.860 } 00:23:50.860 }, 00:23:50.860 "base_bdevs_list": [ 00:23:50.860 { 00:23:50.860 "name": "spare", 00:23:50.860 "uuid": "bebe433b-f731-59a1-aba3-eca74ae63fae", 00:23:50.860 "is_configured": true, 00:23:50.860 "data_offset": 256, 00:23:50.860 "data_size": 7936 00:23:50.860 }, 00:23:50.860 { 00:23:50.860 "name": "BaseBdev2", 00:23:50.860 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:50.860 "is_configured": true, 00:23:50.860 "data_offset": 256, 00:23:50.860 "data_size": 7936 00:23:50.860 } 00:23:50.860 ] 00:23:50.860 }' 00:23:50.860 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:50.860 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:50.860 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:50.860 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.860 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:51.120 [2024-07-24 18:26:59.546263] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:51.120 [2024-07-24 18:26:59.632485] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:51.120 [2024-07-24 
18:26:59.632516] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.120 [2024-07-24 18:26:59.632526] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:51.120 [2024-07-24 18:26:59.632531] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.120 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.379 18:26:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.379 "name": "raid_bdev1", 00:23:51.379 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:51.379 "strip_size_kb": 0, 00:23:51.379 "state": "online", 00:23:51.379 "raid_level": "raid1", 00:23:51.379 "superblock": true, 00:23:51.379 "num_base_bdevs": 2, 00:23:51.379 "num_base_bdevs_discovered": 1, 00:23:51.379 "num_base_bdevs_operational": 1, 00:23:51.379 "base_bdevs_list": [ 00:23:51.379 { 00:23:51.379 "name": null, 00:23:51.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.379 "is_configured": false, 00:23:51.379 "data_offset": 256, 00:23:51.379 "data_size": 7936 00:23:51.379 }, 00:23:51.379 { 00:23:51.379 "name": "BaseBdev2", 00:23:51.379 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:51.379 "is_configured": true, 00:23:51.379 "data_offset": 256, 00:23:51.379 "data_size": 7936 00:23:51.379 } 00:23:51.379 ] 00:23:51.379 }' 00:23:51.379 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.379 18:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.946 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:51.946 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:51.946 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:51.946 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:51.946 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:51.946 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.946 18:27:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.946 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:51.946 "name": "raid_bdev1", 00:23:51.946 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:51.946 "strip_size_kb": 0, 00:23:51.946 "state": "online", 00:23:51.946 "raid_level": "raid1", 00:23:51.946 "superblock": true, 00:23:51.946 "num_base_bdevs": 2, 00:23:51.946 "num_base_bdevs_discovered": 1, 00:23:51.946 "num_base_bdevs_operational": 1, 00:23:51.946 "base_bdevs_list": [ 00:23:51.946 { 00:23:51.946 "name": null, 00:23:51.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.946 "is_configured": false, 00:23:51.946 "data_offset": 256, 00:23:51.946 "data_size": 7936 00:23:51.946 }, 00:23:51.946 { 00:23:51.946 "name": "BaseBdev2", 00:23:51.946 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:51.946 "is_configured": true, 00:23:51.946 "data_offset": 256, 00:23:51.946 "data_size": 7936 00:23:51.946 } 00:23:51.946 ] 00:23:51.946 }' 00:23:51.946 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:51.946 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:51.946 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:52.205 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:52.205 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:52.205 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:52.465 [2024-07-24 18:27:00.863075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:52.465 [2024-07-24 18:27:00.863108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:52.465 [2024-07-24 18:27:00.863122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe7b600 00:23:52.465 [2024-07-24 18:27:00.863146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:52.465 [2024-07-24 18:27:00.863275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:52.465 [2024-07-24 18:27:00.863286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:52.465 [2024-07-24 18:27:00.863316] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:52.465 [2024-07-24 18:27:00.863324] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:52.465 [2024-07-24 18:27:00.863331] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:52.465 BaseBdev1 00:23:52.465 18:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.402 18:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.661 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:53.661 "name": "raid_bdev1", 00:23:53.661 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:53.661 "strip_size_kb": 0, 00:23:53.661 "state": "online", 00:23:53.661 "raid_level": "raid1", 00:23:53.661 "superblock": true, 00:23:53.661 "num_base_bdevs": 2, 00:23:53.661 "num_base_bdevs_discovered": 1, 00:23:53.661 "num_base_bdevs_operational": 1, 00:23:53.661 "base_bdevs_list": [ 00:23:53.661 { 00:23:53.661 "name": null, 00:23:53.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.661 "is_configured": false, 00:23:53.661 "data_offset": 256, 00:23:53.661 "data_size": 7936 00:23:53.661 }, 00:23:53.661 { 00:23:53.661 "name": "BaseBdev2", 00:23:53.661 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:53.661 "is_configured": true, 00:23:53.661 "data_offset": 256, 00:23:53.661 
"data_size": 7936 00:23:53.661 } 00:23:53.661 ] 00:23:53.661 }' 00:23:53.661 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:53.661 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.229 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:54.229 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:54.229 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:54.229 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:54.229 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:54.229 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.229 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.229 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:54.229 "name": "raid_bdev1", 00:23:54.229 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:54.229 "strip_size_kb": 0, 00:23:54.229 "state": "online", 00:23:54.229 "raid_level": "raid1", 00:23:54.229 "superblock": true, 00:23:54.229 "num_base_bdevs": 2, 00:23:54.229 "num_base_bdevs_discovered": 1, 00:23:54.229 "num_base_bdevs_operational": 1, 00:23:54.229 "base_bdevs_list": [ 00:23:54.229 { 00:23:54.229 "name": null, 00:23:54.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.229 "is_configured": false, 00:23:54.229 "data_offset": 256, 00:23:54.229 "data_size": 7936 00:23:54.229 }, 
00:23:54.229 { 00:23:54.229 "name": "BaseBdev2", 00:23:54.229 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:54.229 "is_configured": true, 00:23:54.229 "data_offset": 256, 00:23:54.229 "data_size": 7936 00:23:54.230 } 00:23:54.230 ] 00:23:54.230 }' 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:54.230 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:54.489 [2024-07-24 18:27:02.976545] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:54.489 [2024-07-24 18:27:02.976643] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:54.489 [2024-07-24 18:27:02.976670] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:54.489 request: 00:23:54.489 { 00:23:54.489 "base_bdev": "BaseBdev1", 00:23:54.489 "raid_bdev": "raid_bdev1", 00:23:54.489 "method": "bdev_raid_add_base_bdev", 00:23:54.489 "req_id": 1 00:23:54.489 } 00:23:54.489 Got JSON-RPC error response 00:23:54.489 response: 00:23:54.489 { 00:23:54.489 "code": -22, 00:23:54.489 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:54.489 } 00:23:54.489 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:23:54.489 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:23:54.489 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:54.489 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:54.489 18:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:23:55.423 18:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:55.423 18:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:55.423 18:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:55.423 18:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:55.423 18:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:55.423 18:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:55.423 18:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:55.423 18:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:55.423 18:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:55.423 18:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:55.423 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.423 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.681 18:27:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.681 "name": "raid_bdev1", 00:23:55.681 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:55.681 "strip_size_kb": 0, 00:23:55.681 "state": "online", 00:23:55.681 "raid_level": "raid1", 00:23:55.681 "superblock": true, 00:23:55.681 "num_base_bdevs": 2, 00:23:55.681 "num_base_bdevs_discovered": 1, 00:23:55.681 "num_base_bdevs_operational": 1, 00:23:55.681 "base_bdevs_list": [ 00:23:55.681 { 00:23:55.681 "name": null, 00:23:55.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.681 "is_configured": false, 00:23:55.681 "data_offset": 256, 00:23:55.681 "data_size": 7936 00:23:55.681 }, 00:23:55.681 { 00:23:55.681 "name": "BaseBdev2", 00:23:55.681 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:55.681 "is_configured": true, 00:23:55.681 "data_offset": 256, 00:23:55.681 "data_size": 7936 00:23:55.681 } 00:23:55.681 ] 00:23:55.681 }' 00:23:55.681 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.681 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.248 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:56.248 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:56.248 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:56.248 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:56.248 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:56.248 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.248 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:56.507 "name": "raid_bdev1", 00:23:56.507 "uuid": "d4266ef5-0151-48ca-9de3-7482f354bb03", 00:23:56.507 "strip_size_kb": 0, 00:23:56.507 "state": "online", 00:23:56.507 "raid_level": "raid1", 00:23:56.507 "superblock": true, 00:23:56.507 "num_base_bdevs": 2, 00:23:56.507 "num_base_bdevs_discovered": 1, 00:23:56.507 "num_base_bdevs_operational": 1, 00:23:56.507 "base_bdevs_list": [ 00:23:56.507 { 00:23:56.507 "name": null, 00:23:56.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.507 "is_configured": false, 00:23:56.507 "data_offset": 256, 00:23:56.507 "data_size": 7936 00:23:56.507 }, 00:23:56.507 { 00:23:56.507 "name": "BaseBdev2", 00:23:56.507 "uuid": "dd7a6366-6836-529f-8829-8b78cc7e6412", 00:23:56.507 "is_configured": true, 00:23:56.507 "data_offset": 256, 00:23:56.507 "data_size": 7936 00:23:56.507 } 00:23:56.507 ] 00:23:56.507 }' 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 2313152 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 2313152 ']' 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@954 -- # kill -0 2313152 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2313152 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2313152' 00:23:56.507 killing process with pid 2313152 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 2313152 00:23:56.507 Received shutdown signal, test time was about 60.000000 seconds 00:23:56.507 00:23:56.507 Latency(us) 00:23:56.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.507 =================================================================================================================== 00:23:56.507 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:56.507 [2024-07-24 18:27:04.991645] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:56.507 [2024-07-24 18:27:04.991715] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:56.507 [2024-07-24 18:27:04.991746] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:56.507 [2024-07-24 18:27:04.991753] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe79e50 name raid_bdev1, state offline 00:23:56.507 18:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@974 -- # wait 2313152 00:23:56.507 [2024-07-24 18:27:05.015074] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:56.767 18:27:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:23:56.767 00:23:56.767 real 0m23.917s 00:23:56.767 user 0m36.578s 00:23:56.767 sys 0m3.165s 00:23:56.767 18:27:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.767 18:27:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.767 ************************************ 00:23:56.767 END TEST raid_rebuild_test_sb_md_interleaved 00:23:56.767 ************************************ 00:23:56.767 18:27:05 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:23:56.767 18:27:05 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:23:56.767 18:27:05 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 2313152 ']' 00:23:56.767 18:27:05 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 2313152 00:23:56.767 18:27:05 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:23:56.767 00:23:56.767 real 14m18.389s 00:23:56.767 user 23m41.996s 00:23:56.767 sys 2m42.025s 00:23:56.767 18:27:05 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.767 18:27:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:56.767 ************************************ 00:23:56.767 END TEST bdev_raid 00:23:56.767 ************************************ 00:23:56.767 18:27:05 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:23:56.767 18:27:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:56.767 18:27:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:56.767 18:27:05 -- common/autotest_common.sh@10 -- # set +x 00:23:56.767 ************************************ 00:23:56.767 START TEST bdevperf_config 00:23:56.767 
************************************ 00:23:56.767 18:27:05 bdevperf_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:23:57.026 * Looking for test storage... 00:23:57.026 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:57.026 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:23:57.026 18:27:05 bdevperf_config -- 
bdevperf/common.sh@8 -- # local job_section=job0 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:57.026 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:57.026 18:27:05 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:57.027 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:57.027 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:23:57.027 
18:27:05 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:57.027 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:57.027 18:27:05 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:59.565 18:27:08 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-24 18:27:05.542546] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:23:59.565 [2024-07-24 18:27:05.542607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317795 ] 00:23:59.565 Using job config with 4 jobs 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.0 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.1 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.2 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.3 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.4 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.5 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.6 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.7 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:02.0 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:02.1 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:02.2 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested 
device 0000:b3:02.3 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:02.4 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:02.5 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:02.6 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:02.7 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:01.0 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:01.1 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:01.2 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:01.3 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:01.4 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:01.5 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:01.6 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:01.7 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:02.0 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:02.1 
cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:02.2 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:02.3 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:02.4 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:02.5 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:02.6 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b5:02.7 cannot be used 00:23:59.565 [2024-07-24 18:27:05.650667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.565 [2024-07-24 18:27:05.741303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.565 cpumask for '\''job0'\'' is too big 00:23:59.565 cpumask for '\''job1'\'' is too big 00:23:59.565 cpumask for '\''job2'\'' is too big 00:23:59.565 cpumask for '\''job3'\'' is too big 00:23:59.565 Running I/O for 2 seconds... 
00:23:59.565 00:23:59.565 Latency(us) 00:23:59.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.565 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.565 Malloc0 : 2.01 38032.28 37.14 0.00 0.00 6725.96 1225.52 10066.33 00:23:59.565 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.565 Malloc0 : 2.01 38010.85 37.12 0.00 0.00 6720.08 1133.77 8860.47 00:23:59.565 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.565 Malloc0 : 2.01 37989.64 37.10 0.00 0.00 6714.79 1127.22 7759.46 00:23:59.565 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.565 Malloc0 : 2.02 37967.12 37.08 0.00 0.00 6709.50 1127.22 7182.75 00:23:59.565 =================================================================================================================== 00:23:59.565 Total : 151999.89 148.44 0.00 0.00 6717.58 1127.22 10066.33' 00:23:59.565 18:27:08 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-24 18:27:05.542546] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:23:59.565 [2024-07-24 18:27:05.542607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317795 ] 00:23:59.565 Using job config with 4 jobs 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.0 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.1 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.2 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.3 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.4 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.5 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.565 EAL: Requested device 0000:b3:01.6 cannot be used 00:23:59.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:01.7 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.0 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.1 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.2 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested 
device 0000:b3:02.3 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.4 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.5 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.6 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.7 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.0 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.1 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.2 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.3 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.4 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.5 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.6 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.7 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.0 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.1 
cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.2 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.3 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.4 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.5 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.6 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.7 cannot be used 00:23:59.566 [2024-07-24 18:27:05.650667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.566 [2024-07-24 18:27:05.741303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.566 cpumask for '\''job0'\'' is too big 00:23:59.566 cpumask for '\''job1'\'' is too big 00:23:59.566 cpumask for '\''job2'\'' is too big 00:23:59.566 cpumask for '\''job3'\'' is too big 00:23:59.566 Running I/O for 2 seconds... 
00:23:59.566 00:23:59.566 Latency(us) 00:23:59.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.566 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.566 Malloc0 : 2.01 38032.28 37.14 0.00 0.00 6725.96 1225.52 10066.33 00:23:59.566 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.566 Malloc0 : 2.01 38010.85 37.12 0.00 0.00 6720.08 1133.77 8860.47 00:23:59.566 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.566 Malloc0 : 2.01 37989.64 37.10 0.00 0.00 6714.79 1127.22 7759.46 00:23:59.566 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.566 Malloc0 : 2.02 37967.12 37.08 0.00 0.00 6709.50 1127.22 7182.75 00:23:59.566 =================================================================================================================== 00:23:59.566 Total : 151999.89 148.44 0.00 0.00 6717.58 1127.22 10066.33' 00:23:59.566 18:27:08 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-24 18:27:05.542546] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:23:59.566 [2024-07-24 18:27:05.542607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317795 ] 00:23:59.566 Using job config with 4 jobs 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:01.0 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:01.1 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:01.2 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:01.3 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:01.4 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:01.5 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:01.6 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:01.7 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.0 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.1 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.2 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested 
device 0000:b3:02.3 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.4 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.5 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.6 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b3:02.7 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.0 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.1 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.2 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.3 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.4 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.5 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.6 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:01.7 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.0 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.1 
cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.2 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.3 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.4 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.5 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.6 cannot be used 00:23:59.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.566 EAL: Requested device 0000:b5:02.7 cannot be used 00:23:59.566 [2024-07-24 18:27:05.650667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.566 [2024-07-24 18:27:05.741303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.566 cpumask for '\''job0'\'' is too big 00:23:59.566 cpumask for '\''job1'\'' is too big 00:23:59.566 cpumask for '\''job2'\'' is too big 00:23:59.566 cpumask for '\''job3'\'' is too big 00:23:59.566 Running I/O for 2 seconds... 
00:23:59.566 00:23:59.566 Latency(us) 00:23:59.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.566 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.566 Malloc0 : 2.01 38032.28 37.14 0.00 0.00 6725.96 1225.52 10066.33 00:23:59.566 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.566 Malloc0 : 2.01 38010.85 37.12 0.00 0.00 6720.08 1133.77 8860.47 00:23:59.566 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.566 Malloc0 : 2.01 37989.64 37.10 0.00 0.00 6714.79 1127.22 7759.46 00:23:59.567 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:59.567 Malloc0 : 2.02 37967.12 37.08 0.00 0.00 6709.50 1127.22 7182.75 00:23:59.567 =================================================================================================================== 00:23:59.567 Total : 151999.89 148.44 0.00 0.00 6717.58 1127.22 10066.33' 00:23:59.567 18:27:08 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:59.567 18:27:08 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:59.567 18:27:08 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:23:59.567 18:27:08 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:59.567 [2024-07-24 18:27:08.148802] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:23:59.567 [2024-07-24 18:27:08.148855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2318336 ] 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:01.0 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:01.1 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:01.2 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:01.3 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:01.4 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:01.5 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:01.6 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:01.7 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:02.0 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:02.1 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:02.2 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:02.3 cannot be used 
00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:02.4 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:02.5 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:02.6 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b3:02.7 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:01.0 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:01.1 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:01.2 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:01.3 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:01.4 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:01.5 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:01.6 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:01.7 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:02.0 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:02.1 cannot be used 00:23:59.826 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:02.2 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:02.3 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:02.4 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:02.5 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:02.6 cannot be used 00:23:59.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:59.826 EAL: Requested device 0000:b5:02.7 cannot be used 00:23:59.826 [2024-07-24 18:27:08.265205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.826 [2024-07-24 18:27:08.360383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.085 cpumask for 'job0' is too big 00:24:00.085 cpumask for 'job1' is too big 00:24:00.085 cpumask for 'job2' is too big 00:24:00.085 cpumask for 'job3' is too big 00:24:02.619 18:27:10 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:24:02.619 Running I/O for 2 seconds... 
00:24:02.619
00:24:02.619 Latency(us)
00:24:02.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:02.619 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:02.619 Malloc0 : 2.01 37274.63 36.40 0.00 0.00 6862.77 1159.99 10013.90
00:24:02.619 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:02.619 Malloc0 : 2.01 37253.64 36.38 0.00 0.00 6857.02 1159.99 8965.32
00:24:02.619 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:02.619 Malloc0 : 2.01 37232.04 36.36 0.00 0.00 6851.47 1146.88 7864.32
00:24:02.619 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:24:02.619 Malloc0 : 2.02 37209.45 36.34 0.00 0.00 6845.79 1140.33 7340.03
00:24:02.619 ===================================================================================================================
00:24:02.619 Total : 148969.76 145.48 0.00 0.00 6854.26 1140.33 10013.90'
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:24:02.619
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:24:02.619
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:24:02.619
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:24:02.619 18:27:10 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:24:05.154 18:27:13 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-24 18:27:10.784832] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:24:05.154 [2024-07-24 18:27:10.784895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319034 ] 00:24:05.154 Using job config with 3 jobs 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.7 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested 
device 0000:b3:02.3 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:02.1 
cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:05.154 [2024-07-24 18:27:10.886273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.154 [2024-07-24 18:27:10.963696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.154 cpumask for '\''job0'\'' is too big 00:24:05.154 cpumask for '\''job1'\'' is too big 00:24:05.154 cpumask for '\''job2'\'' is too big 00:24:05.154 Running I/O for 2 seconds... 
00:24:05.154
00:24:05.154 Latency(us)
00:24:05.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:05.154 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:05.154 Malloc0 : 2.01 51683.94 50.47 0.00 0.00 4950.36 1232.08 7811.89
00:24:05.154 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:05.154 Malloc0 : 2.01 51694.95 50.48 0.00 0.00 4941.83 1245.18 6553.60
00:24:05.154 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:05.154 Malloc0 : 2.01 51664.55 50.45 0.00 0.00 4937.62 1212.42 5609.88
00:24:05.154 ===================================================================================================================
00:24:05.154 Total : 155043.44 151.41 0.00 0.00 4943.26 1212.42 7811.89'
00:24:05.154 18:27:13 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-24 18:27:10.784832] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:24:05.154 [2024-07-24 18:27:10.784895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319034 ] 00:24:05.154 Using job config with 3 jobs 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:01.7 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested 
device 0000:b3:02.3 cannot be used 00:24:05.154 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.154 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.1 
cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:05.155 [2024-07-24 18:27:10.886273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.155 [2024-07-24 18:27:10.963696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.155 cpumask for '\''job0'\'' is too big 00:24:05.155 cpumask for '\''job1'\'' is too big 00:24:05.155 cpumask for '\''job2'\'' is too big 00:24:05.155 Running I/O for 2 seconds... 
00:24:05.155
00:24:05.155 Latency(us)
00:24:05.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:05.155 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:05.155 Malloc0 : 2.01 51683.94 50.47 0.00 0.00 4950.36 1232.08 7811.89
00:24:05.155 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:05.155 Malloc0 : 2.01 51694.95 50.48 0.00 0.00 4941.83 1245.18 6553.60
00:24:05.155 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:05.155 Malloc0 : 2.01 51664.55 50.45 0.00 0.00 4937.62 1212.42 5609.88
00:24:05.155 ===================================================================================================================
00:24:05.155 Total : 155043.44 151.41 0.00 0.00 4943.26 1212.42 7811.89'
00:24:05.155 18:27:13 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:24:05.155 18:27:13 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-24 18:27:10.784832] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:24:05.155 [2024-07-24 18:27:10.784895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319034 ] 00:24:05.155 Using job config with 3 jobs 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:01.7 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested 
device 0000:b3:02.3 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.1 
cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:05.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.155 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:05.155 [2024-07-24 18:27:10.886273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.155 [2024-07-24 18:27:10.963696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.155 cpumask for '\''job0'\'' is too big 00:24:05.155 cpumask for '\''job1'\'' is too big 00:24:05.155 cpumask for '\''job2'\'' is too big 00:24:05.155 Running I/O for 2 seconds... 
00:24:05.155
00:24:05.155 Latency(us)
00:24:05.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:05.155 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:05.155 Malloc0 : 2.01 51683.94 50.47 0.00 0.00 4950.36 1232.08 7811.89
00:24:05.155 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:05.155 Malloc0 : 2.01 51694.95 50.48 0.00 0.00 4941.83 1245.18 6553.60
00:24:05.155 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:24:05.155 Malloc0 : 2.01 51664.55 50.45 0.00 0.00 4937.62 1212.42 5609.88
00:24:05.155 ===================================================================================================================
00:24:05.155 Total : 155043.44 151.41 0.00 0.00 4943.26 1212.42 7811.89'
00:24:05.155 18:27:13 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:24:05.155 18:27:13 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]]
00:24:05.155 18:27:13 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup
00:24:05.155 18:27:13 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:24:05.155 18:27:13 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1
00:24:05.155 18:27:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:24:05.155 18:27:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:24:05.156
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:24:05.156
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:24:05.156
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:24:05.156
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]'
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:24:05.156
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:24:05.156 18:27:13 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:24:07.698 18:27:15 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-24 18:27:13.396553] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:24:07.698 [2024-07-24 18:27:13.396599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319563 ] 00:24:07.698 Using job config with 4 jobs 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.7 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested 
device 0000:b3:02.3 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:02.1 
cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:07.698 [2024-07-24 18:27:13.497151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.698 [2024-07-24 18:27:13.582674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.698 cpumask for '\''job0'\'' is too big 00:24:07.698 cpumask for '\''job1'\'' is too big 00:24:07.698 cpumask for '\''job2'\'' is too big 00:24:07.698 cpumask for '\''job3'\'' is too big 00:24:07.698 Running I/O for 2 seconds... 
00:24:07.698 00:24:07.698 Latency(us) 00:24:07.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.698 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.698 Malloc0 : 2.02 19239.53 18.79 0.00 0.00 13301.44 2372.40 20342.37 00:24:07.698 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.698 Malloc1 : 2.02 19228.50 18.78 0.00 0.00 13300.28 2870.48 20342.37 00:24:07.698 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.698 Malloc0 : 2.02 19217.81 18.77 0.00 0.00 13277.12 2424.83 17930.65 00:24:07.698 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.698 Malloc1 : 2.03 19206.80 18.76 0.00 0.00 13276.29 2870.48 17930.65 00:24:07.698 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.698 Malloc0 : 2.03 19196.15 18.75 0.00 0.00 13255.44 2333.08 15518.92 00:24:07.698 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.698 Malloc1 : 2.03 19185.29 18.74 0.00 0.00 13255.70 2844.26 15518.92 00:24:07.698 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.698 Malloc0 : 2.03 19174.57 18.73 0.00 0.00 13235.20 2333.08 14889.78 00:24:07.698 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.698 Malloc1 : 2.03 19163.71 18.71 0.00 0.00 13233.56 2844.26 14994.64 00:24:07.698 =================================================================================================================== 00:24:07.698 Total : 153612.35 150.01 0.00 0.00 13266.88 2333.08 20342.37' 00:24:07.698 18:27:15 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-24 18:27:13.396553] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:24:07.698 [2024-07-24 18:27:13.396599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319563 ] 00:24:07.698 Using job config with 4 jobs 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.698 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.7 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested 
device 0000:b3:02.3 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.1 
cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:07.699 [2024-07-24 18:27:13.497151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.699 [2024-07-24 18:27:13.582674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.699 cpumask for '\''job0'\'' is too big 00:24:07.699 cpumask for '\''job1'\'' is too big 00:24:07.699 cpumask for '\''job2'\'' is too big 00:24:07.699 cpumask for '\''job3'\'' is too big 00:24:07.699 Running I/O for 2 seconds... 
00:24:07.699 00:24:07.699 Latency(us) 00:24:07.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.699 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.699 Malloc0 : 2.02 19239.53 18.79 0.00 0.00 13301.44 2372.40 20342.37 00:24:07.699 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.699 Malloc1 : 2.02 19228.50 18.78 0.00 0.00 13300.28 2870.48 20342.37 00:24:07.699 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.699 Malloc0 : 2.02 19217.81 18.77 0.00 0.00 13277.12 2424.83 17930.65 00:24:07.699 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.699 Malloc1 : 2.03 19206.80 18.76 0.00 0.00 13276.29 2870.48 17930.65 00:24:07.699 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.699 Malloc0 : 2.03 19196.15 18.75 0.00 0.00 13255.44 2333.08 15518.92 00:24:07.699 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.699 Malloc1 : 2.03 19185.29 18.74 0.00 0.00 13255.70 2844.26 15518.92 00:24:07.699 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.699 Malloc0 : 2.03 19174.57 18.73 0.00 0.00 13235.20 2333.08 14889.78 00:24:07.699 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.699 Malloc1 : 2.03 19163.71 18.71 0.00 0.00 13233.56 2844.26 14994.64 00:24:07.699 =================================================================================================================== 00:24:07.699 Total : 153612.35 150.01 0.00 0.00 13266.88 2333.08 20342.37' 00:24:07.699 18:27:15 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-24 18:27:13.396553] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:24:07.699 [2024-07-24 18:27:13.396599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319563 ] 00:24:07.699 Using job config with 4 jobs 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:01.7 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested 
device 0000:b3:02.3 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.1 
cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.699 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:07.699 [2024-07-24 18:27:13.497151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.699 [2024-07-24 18:27:13.582674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.700 cpumask for '\''job0'\'' is too big 00:24:07.700 cpumask for '\''job1'\'' is too big 00:24:07.700 cpumask for '\''job2'\'' is too big 00:24:07.700 cpumask for '\''job3'\'' is too big 00:24:07.700 Running I/O for 2 seconds... 
00:24:07.700 00:24:07.700 Latency(us) 00:24:07.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.700 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.700 Malloc0 : 2.02 19239.53 18.79 0.00 0.00 13301.44 2372.40 20342.37 00:24:07.700 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.700 Malloc1 : 2.02 19228.50 18.78 0.00 0.00 13300.28 2870.48 20342.37 00:24:07.700 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.700 Malloc0 : 2.02 19217.81 18.77 0.00 0.00 13277.12 2424.83 17930.65 00:24:07.700 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.700 Malloc1 : 2.03 19206.80 18.76 0.00 0.00 13276.29 2870.48 17930.65 00:24:07.700 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.700 Malloc0 : 2.03 19196.15 18.75 0.00 0.00 13255.44 2333.08 15518.92 00:24:07.700 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.700 Malloc1 : 2.03 19185.29 18.74 0.00 0.00 13255.70 2844.26 15518.92 00:24:07.700 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.700 Malloc0 : 2.03 19174.57 18.73 0.00 0.00 13235.20 2333.08 14889.78 00:24:07.700 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:07.700 Malloc1 : 2.03 19163.71 18.71 0.00 0.00 13233.56 2844.26 14994.64 00:24:07.700 =================================================================================================================== 00:24:07.700 Total : 153612.35 150.01 0.00 0.00 13266.88 2333.08 20342.37' 00:24:07.700 18:27:15 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:07.700 18:27:15 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:07.700 18:27:15 bdevperf_config -- 
bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:24:07.700 18:27:15 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:24:07.700 18:27:15 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:24:07.700 18:27:15 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:07.700 00:24:07.700 real 0m10.617s 00:24:07.700 user 0m9.471s 00:24:07.700 sys 0m0.984s 00:24:07.700 18:27:15 bdevperf_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:07.700 18:27:15 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:24:07.700 ************************************ 00:24:07.700 END TEST bdevperf_config 00:24:07.700 ************************************ 00:24:07.700 18:27:16 -- spdk/autotest.sh@196 -- # uname -s 00:24:07.700 18:27:16 -- spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:24:07.700 18:27:16 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:07.700 18:27:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:07.700 18:27:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:07.700 18:27:16 -- common/autotest_common.sh@10 -- # set +x 00:24:07.700 ************************************ 00:24:07.700 START TEST reactor_set_interrupt 00:24:07.700 ************************************ 00:24:07.700 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:07.700 * Looking for test storage... 
00:24:07.700 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.700 18:27:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:24:07.700 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:07.700 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.700 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.700 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:24:07.700 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:07.700 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:24:07.700 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:07.700 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:24:07.700 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:07.700 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:07.700 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:24:07.700 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:24:07.700 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:24:07.700 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:07.700 
18:27:16 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:24:07.700 18:27:16 reactor_set_interrupt -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:24:07.700 18:27:16 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 
00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 
00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:24:07.701 18:27:16 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:24:07.701 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:07.701 18:27:16 reactor_set_interrupt -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:07.701 #define SPDK_CONFIG_H 00:24:07.701 #define SPDK_CONFIG_APPS 1 00:24:07.701 #define SPDK_CONFIG_ARCH native 00:24:07.701 #undef SPDK_CONFIG_ASAN 00:24:07.701 #undef SPDK_CONFIG_AVAHI 00:24:07.701 #undef SPDK_CONFIG_CET 00:24:07.701 #define SPDK_CONFIG_COVERAGE 1 00:24:07.701 #define SPDK_CONFIG_CROSS_PREFIX 00:24:07.701 #define SPDK_CONFIG_CRYPTO 1 00:24:07.701 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:24:07.701 #undef SPDK_CONFIG_CUSTOMOCF 00:24:07.701 #undef SPDK_CONFIG_DAOS 00:24:07.701 #define SPDK_CONFIG_DAOS_DIR 00:24:07.701 #define SPDK_CONFIG_DEBUG 1 00:24:07.701 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:24:07.701 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:07.701 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:07.701 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:07.701 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:07.701 #undef SPDK_CONFIG_DPDK_UADK 00:24:07.701 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:07.701 #define SPDK_CONFIG_EXAMPLES 1 00:24:07.701 #undef SPDK_CONFIG_FC 00:24:07.701 #define SPDK_CONFIG_FC_PATH 00:24:07.701 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:07.701 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:07.701 #undef SPDK_CONFIG_FUSE 00:24:07.701 #undef SPDK_CONFIG_FUZZER 00:24:07.701 #define SPDK_CONFIG_FUZZER_LIB 
00:24:07.701 #undef SPDK_CONFIG_GOLANG 00:24:07.701 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:07.701 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:24:07.701 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:07.701 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:24:07.701 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:07.701 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:07.701 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:07.701 #define SPDK_CONFIG_IDXD 1 00:24:07.701 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:07.701 #define SPDK_CONFIG_IPSEC_MB 1 00:24:07.701 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:24:07.701 #define SPDK_CONFIG_ISAL 1 00:24:07.701 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:07.701 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:07.701 #define SPDK_CONFIG_LIBDIR 00:24:07.701 #undef SPDK_CONFIG_LTO 00:24:07.701 #define SPDK_CONFIG_MAX_LCORES 128 00:24:07.701 #define SPDK_CONFIG_NVME_CUSE 1 00:24:07.701 #undef SPDK_CONFIG_OCF 00:24:07.701 #define SPDK_CONFIG_OCF_PATH 00:24:07.701 #define SPDK_CONFIG_OPENSSL_PATH 00:24:07.701 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:07.701 #define SPDK_CONFIG_PGO_DIR 00:24:07.701 #undef SPDK_CONFIG_PGO_USE 00:24:07.701 #define SPDK_CONFIG_PREFIX /usr/local 00:24:07.701 #undef SPDK_CONFIG_RAID5F 00:24:07.701 #undef SPDK_CONFIG_RBD 00:24:07.701 #define SPDK_CONFIG_RDMA 1 00:24:07.701 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:07.701 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:07.701 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:07.701 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:07.701 #define SPDK_CONFIG_SHARED 1 00:24:07.701 #undef SPDK_CONFIG_SMA 00:24:07.701 #define SPDK_CONFIG_TESTS 1 00:24:07.701 #undef SPDK_CONFIG_TSAN 00:24:07.701 #define SPDK_CONFIG_UBLK 1 00:24:07.701 #define SPDK_CONFIG_UBSAN 1 00:24:07.701 #undef SPDK_CONFIG_UNIT_TESTS 00:24:07.701 #undef SPDK_CONFIG_URING 00:24:07.701 #define SPDK_CONFIG_URING_PATH 00:24:07.701 #undef SPDK_CONFIG_URING_ZNS 00:24:07.701 #undef 
SPDK_CONFIG_USDT 00:24:07.701 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:24:07.701 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:24:07.701 #undef SPDK_CONFIG_VFIO_USER 00:24:07.701 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:07.701 #define SPDK_CONFIG_VHOST 1 00:24:07.701 #define SPDK_CONFIG_VIRTIO 1 00:24:07.701 #undef SPDK_CONFIG_VTUNE 00:24:07.701 #define SPDK_CONFIG_VTUNE_DIR 00:24:07.701 #define SPDK_CONFIG_WERROR 1 00:24:07.701 #define SPDK_CONFIG_WPDK_DIR 00:24:07.701 #undef SPDK_CONFIG_XNVME 00:24:07.701 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:07.701 18:27:16 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:07.701 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:07.701 18:27:16 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.701 18:27:16 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.701 18:27:16 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.701 18:27:16 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.701 18:27:16 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.701 18:27:16 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.701 18:27:16 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:24:07.702 18:27:16 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@81 -- # [[ 
............................... != QEMU ]] 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:24:07.702 18:27:16 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export 
SPDK_TEST_ISAL 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:24:07.702 18:27:16 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:24:07.702 
18:27:16 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_OPAL 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:24:07.702 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:24:07.703 18:27:16 reactor_set_interrupt -- 
common/autotest_common.sh@158 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 1 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:24:07.703 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:07.704 
18:27:16 reactor_set_interrupt -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@183 
-- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@200 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@202 -- # cat 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@255 -- # 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export valgrind= 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@265 -- # valgrind= 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@271 -- # uname -s 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@278 -- 
# HUGE_EVEN_ALLOC=yes 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@281 -- # MAKE=make 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:24:07.704 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@301 -- # TEST_MODE= 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@320 -- # [[ -z 2320017 ]] 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@320 -- # kill -0 2320017 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local mount target_dir 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.wELwOc 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:07.965 
18:27:16 reactor_set_interrupt -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.wELwOc/tests/interrupt /tmp/spdk.wELwOc 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@329 -- # df -T 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=951066624 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4333363200 
00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=50771951616 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742276608 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=10970324992 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30815502336 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=55635968 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=12338679808 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348456960 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=9777152 00:24:07.965 
18:27:16 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30866563072 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4575232 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=6174220288 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174224384 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:24:07.965 * Looking for test storage... 
00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@370 -- # local target_space new_size 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mount=/ 00:24:07.965 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@376 -- # target_space=50771951616 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@383 -- # new_size=13184917504 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 
00:24:07.966 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@391 -- # return 0 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:07.966 18:27:16 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2320150 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2320150 /var/tmp/spdk.sock 00:24:07.966 18:27:16 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2320150 ']' 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:07.966 18:27:16 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:07.966 [2024-07-24 18:27:16.386432] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
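The `waitforlisten` call above polls (with `max_retries=100`) until the freshly started `interrupt_tgt` process exposes its UNIX-domain RPC socket. A simplified stand-in for that wait loop is sketched below; the loop body and sleep interval are assumptions, only the socket-path idea and the retry cap mirror the trace:

```shell
# Hypothetical waitforlisten-style poll: succeed once the RPC socket
# appears, give up after max_retries attempts (default 100, as logged).
wait_for_socket() {
  sock=$1
  max_retries=${2:-100}
  i=0
  while [ "$i" -lt "$max_retries" ]; do
    [ -S "$sock" ] && return 0   # socket exists: target is listening
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# With a path that does not exist, the poll times out and returns 1.
missing=$(mktemp -u)
wait_for_socket "$missing" 1 || echo "timed out waiting for $missing"
```

The real helper also checks that the target PID is still alive between attempts (the `kill -0` seen earlier in the trace), so a crashed target fails fast instead of burning all retries.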
00:24:07.966 [2024-07-24 18:27:16.386493] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320150 ] 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:01.7 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:02.3 cannot 
be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:02.1 cannot be used 00:24:07.966 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:07.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:07.966 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:07.966 [2024-07-24 18:27:16.479863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:07.966 [2024-07-24 18:27:16.554317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.966 [2024-07-24 18:27:16.554409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.966 [2024-07-24 18:27:16.554409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.226 [2024-07-24 18:27:16.618137] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
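Earlier in the trace, `set_test_storage` walks its `storage_candidates`, reads `df` output into `mounts`/`avails` arrays, and settles on the first location whose available space covers `requested_size` (2214592512 bytes here, which `spdk_root` with ~50 GB free satisfies). A minimal sketch of that selection step, using a hypothetical helper over rows shaped like the logged `df` results:

```shell
# Requested size from the trace: 2 GiB plus slack.
requested_size=2214592512

# Hypothetical helper (not SPDK's set_test_storage): scan
# "<source> <avail-bytes>" rows and print the first with enough room.
pick_storage() {
  req=$1
  while read -r source avail; do
    if [ "$avail" -ge "$req" ]; then
      echo "$source"
      return 0
    fi
  done
  return 1
}

# Rows shaped like the df results logged above.
printf '%s\n' \
  '/dev/pmem0 951066624' \
  'spdk_root 50771951616' |
  pick_storage "$requested_size"
# prints: spdk_root
```

The real function additionally special-cases tmpfs/ramfs mounts (growing them with a remount rather than rejecting them), which is why the trace tests `[[ overlay == tmpfs ]]` before accepting the overlay root.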
00:24:08.795 18:27:17 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:08.795 18:27:17 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:24:08.795 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:24:08.795 18:27:17 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:08.795 Malloc0 00:24:08.795 Malloc1 00:24:08.795 Malloc2 00:24:08.795 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:24:08.795 18:27:17 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:24:08.795 18:27:17 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:09.055 5000+0 records in 00:24:09.055 5000+0 records out 00:24:09.055 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0269083 s, 381 MB/s 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:09.055 AIO0 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 2320150 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 2320150 without_thd 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2320150 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 
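The `setup_bdev_aio` step above backs the `AIO0` bdev with a plain file produced by `dd` (`bs=2048 count=5000`), so its size is simply bs*count = 10240000 bytes, matching the dd summary in the log. A throwaway reproduction against a temp file:

```shell
# Recreate the AIO backing file exactly as the trace does, then verify
# the size: 2048-byte blocks * 5000 blocks = 10240000 bytes (10 MB).
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 2>/dev/null
size=$(wc -c < "$aiofile" | tr -d ' ')
echo "$size"    # 10240000
rm -f "$aiofile"
```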
00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:09.055 18:27:17 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:09.316 18:27:17 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:24:09.316 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:09.316 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:09.316 18:27:17 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:24:09.316 18:27:17 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:09.316 18:27:17 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:24:09.316 18:27:17 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:09.316 18:27:17 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:09.316 18:27:17 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:09.641 18:27:17 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
echo '' 00:24:09.642 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:09.642 18:27:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:09.642 spdk_thread ids are 1 on reactor0. 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2320150 0 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2320150 0 idle 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320150 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320150 -w 256 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320150 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.30 reactor_0' 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320150 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.30 reactor_0 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:09.642 18:27:18 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2320150 1 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2320150 1 idle 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320150 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320150 -w 256 00:24:09.642 18:27:18 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320176 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_1' 
00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320176 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_1 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2320150 2 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2320150 2 idle 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320150 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:09.902 18:27:18 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 2320150 -w 256 00:24:09.902 18:27:18 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320177 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_2' 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320177 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_2 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:24:10.162 [2024-07-24 18:27:18.703235] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
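The repeated `reactor_is_busy_or_idle` probes above all follow one pattern: grep the reactor thread out of `top -bHn 1`, take the %CPU column with `awk '{print $9}'`, drop the fraction, and compare against the thresholds visible in the trace (a busy reactor fails under 70%, an idle one fails over 30%). A reconstruction of that classification, with the sample line copied from the log (GNU sed/awk assumed):

```shell
# Classify one top(1) thread line as busy/idle per the traced thresholds.
classify_reactor() {
  top_line=$1
  state=$2
  cpu_rate=$(echo "$top_line" | sed -e 's/^\s*//g' | awk '{print $9}')
  cpu_rate=${cpu_rate%%.*}          # 93.8 -> 93, 0.0 -> 0
  case $state in
    busy) [ "$cpu_rate" -ge 70 ] ;;
    idle) [ "$cpu_rate" -le 30 ] ;;
  esac
}

classify_reactor '2320150 root 20 0 128.2g 36736 24192 R 93.8 0.1 0:00.65 reactor_0' busy &&
  echo "reactor_0 is busy"
```

The trace also retries this probe up to 10 times (`(( j = 10 ))` counting down), since a single one-second `top` sample of a reactor that just switched modes can be misleading.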
00:24:10.162 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:10.421 [2024-07-24 18:27:18.866940] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:10.421 [2024-07-24 18:27:18.867527] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:10.421 18:27:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:10.681 [2024-07-24 18:27:19.026829] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:10.681 [2024-07-24 18:27:19.026997] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2320150 0 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2320150 0 busy 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320150 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:10.681 18:27:19 reactor_set_interrupt 
-- interrupt/common.sh@24 -- # top -bHn 1 -p 2320150 -w 256 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320150 root 20 0 128.2g 36736 24192 R 93.8 0.1 0:00.65 reactor_0' 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320150 root 20 0 128.2g 36736 24192 R 93.8 0.1 0:00.65 reactor_0 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.8 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2320150 2 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2320150 2 busy 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320150 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:10.681 18:27:19 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320150 -w 256 00:24:10.681 18:27:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320177 root 20 0 128.2g 36736 24192 R 99.9 0.1 0:00.36 reactor_2' 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320177 root 20 0 128.2g 36736 24192 R 99.9 0.1 0:00.36 reactor_2 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:10.940 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:11.199 [2024-07-24 18:27:19.566826] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:24:11.199 [2024-07-24 18:27:19.566920] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2320150 2 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2320150 2 idle 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320150 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320150 -w 256 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320177 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.53 reactor_2' 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320177 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.53 reactor_2 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:11.199 18:27:19 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:11.199 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:11.458 [2024-07-24 18:27:19.914822] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:11.459 [2024-07-24 18:27:19.915110] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:11.459 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:24:11.459 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:24:11.459 18:27:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:24:11.718 [2024-07-24 18:27:20.087191] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2320150 0 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2320150 0 idle 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320150 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320150 -w 256 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320150 root 20 0 128.2g 36736 24192 S 6.7 0.1 0:01.36 reactor_0' 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320150 root 20 0 128.2g 36736 24192 S 6.7 0.1 0:01.36 reactor_0 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = 
\i\d\l\e ]] 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:24:11.718 18:27:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 2320150 00:24:11.718 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2320150 ']' 00:24:11.718 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 2320150 00:24:11.718 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:24:11.718 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:11.718 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2320150 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2320150' 00:24:11.978 killing process with pid 2320150 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 2320150 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 2320150 00:24:11.978 18:27:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:24:11.978 18:27:20 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:11.978 18:27:20 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:24:11.978 18:27:20 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.978 18:27:20 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:11.978 18:27:20 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2320797 00:24:11.978 18:27:20 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:11.978 18:27:20 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.978 18:27:20 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2320797 /var/tmp/spdk.sock 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2320797 ']' 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:11.978 18:27:20 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:12.238 [2024-07-24 18:27:20.605812] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:24:12.238 [2024-07-24 18:27:20.605860] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320797 ] 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:01.7 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:02.3 cannot 
be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:02.1 cannot be used 00:24:12.238 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:12.238 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.238 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:12.238 [2024-07-24 18:27:20.699184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:12.238 [2024-07-24 18:27:20.770398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.238 [2024-07-24 18:27:20.770420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.238 [2024-07-24 18:27:20.770423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.497 [2024-07-24 18:27:20.835029] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
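Once EAL is up, the trace provisions a 10 MB backing file with dd (5000 blocks of 2048 bytes = 10240000 bytes, matching the `5000+0 records` output below) and registers it as an AIO bdev. A standalone sketch of the file-provisioning step; the temp path is an assumption, and the `rpc.py bdev_aio_create` call is shown only as a comment because it requires a running interrupt_tgt:

```shell
# Create the AIO backing file the same way the harness does:
# 5000 blocks x 2048 bytes = 10240000 bytes.
aiofile=$(mktemp /tmp/aiofile.XXXXXX)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 status=none

# With a live target, the file would then back an AIO bdev with a
# 2048-byte block size, as in the trace:
#   scripts/rpc.py bdev_aio_create "$aiofile" AIO0 2048

size=$(wc -c < "$aiofile")
echo "$size"                 # -> 10240000
rm -f "$aiofile"
```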
00:24:13.063 18:27:21 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.063 18:27:21 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:24:13.063 18:27:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:24:13.063 18:27:21 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:13.063 Malloc0 00:24:13.063 Malloc1 00:24:13.063 Malloc2 00:24:13.063 18:27:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:24:13.063 18:27:21 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:24:13.063 18:27:21 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:13.063 18:27:21 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:13.063 5000+0 records in 00:24:13.063 5000+0 records out 00:24:13.063 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0265264 s, 386 MB/s 00:24:13.063 18:27:21 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:13.321 AIO0 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 2320797 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 2320797 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=2320797 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:13.321 18:27:21 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:13.321 18:27:21 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:13.579 18:27:22 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:24:13.579 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:13.579 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:13.579 18:27:22 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:24:13.579 18:27:22 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:13.579 18:27:22 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:24:13.579 18:27:22 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:13.579 18:27:22 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:13.579 18:27:22 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:24:13.838 
18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:13.838 spdk_thread ids are 1 on reactor0. 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2320797 0 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2320797 0 idle 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320797 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320797 -w 256 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320797 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.30 reactor_0' 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320797 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.30 reactor_0 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # 
awk '{print $9}' 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2320797 1 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2320797 1 idle 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320797 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320797 -w 256 00:24:13.838 18:27:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320815 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1' 00:24:14.097 18:27:22 
reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320815 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 2320797 2 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2320797 2 idle 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320797 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 
1 -p 2320797 -w 256 00:24:14.097 18:27:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320816 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2' 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320816 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:14.357 [2024-07-24 18:27:22.890963] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:14.357 [2024-07-24 18:27:22.891073] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
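The thread-id lookup earlier in the trace pipes `thread_get_stats` through jq with the filter `.threads|.[]|select(.cpumask == $reactor_cpumask)|.id`, after stripping the `0x` prefix from the mask. A self-contained rerun of that filter against canned JSON; python3 stands in for jq so the sketch has no jq dependency, and the payload shape is an assumption reconstructed from the filter, since the trace truncates the actual RPC output:

```shell
# Canned thread_get_stats-style payload; ids and masks are illustrative.
stats='{"threads":[{"id":1,"cpumask":"1"},{"id":2,"cpumask":"4"}]}'

# Equivalent of:
#   jq --arg reactor_cpumask 1 \
#      '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
thread_ids_for_mask() {
    local mask=$1
    echo "$stats" | python3 -c '
import json, sys
mask = sys.argv[1]
for t in json.load(sys.stdin)["threads"]:
    if t["cpumask"] == mask:    # jq --arg compares as strings, so we do too
        print(t["id"])
' "$mask"
}

thread_ids_for_mask 1   # -> 1
thread_ids_for_mask 4   # -> 2
```

This is why the trace shows `echo 1` for reactor 0's mask (0x1) but `echo ''` for 0x4 before the threads were bound there: an empty result simply means no thread carried that cpumask yet.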
00:24:14.357 [2024-07-24 18:27:22.891150] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:14.357 18:27:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:14.616 [2024-07-24 18:27:23.067311] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:14.616 [2024-07-24 18:27:23.067668] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2320797 0 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2320797 0 busy 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320797 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320797 -w 256 00:24:14.616 18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320797 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.65 reactor_0' 00:24:14.907 18:27:23 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # echo 2320797 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.65 reactor_0 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 2320797 2 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 2320797 2 busy 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320797 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320797 -w 256 00:24:14.907 
18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320816 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.36 reactor_2' 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320816 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.36 reactor_2 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:14.907 18:27:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:15.168 [2024-07-24 18:27:23.596832] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
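Teardown in this trace goes through the `killprocess` helper from autotest_common.sh: confirm the pid exists with `kill -0`, check the command name via `ps --no-headers -o comm=` so a bare `sudo` wrapper is never signaled directly, then kill and reap. A simplified, runnable sketch of that sequence under those assumptions (the real helper has extra branches, e.g. escalating for sudo-wrapped children, which are omitted here):

```shell
# Simplified killprocess, modeled on the sequence visible in the trace.
killprocess() {
    local pid=$1
    if [ -z "$pid" ]; then return 1; fi
    if ! kill -0 "$pid" 2>/dev/null; then return 1; fi   # must be alive
    local name
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" = sudo ]; then return 1; fi  # don't signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; ignore SIGTERM status
    return 0
}

# Demo against a throwaway child process.
sleep 30 &
demo_pid=$!
killprocess "$demo_pid"      # prints: killing process with pid <demo_pid>
```

As in the trace, the `echo 'killing process with pid ...'` line shows up twice in the console: once as the xtraced command and once as its output.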
00:24:15.168 [2024-07-24 18:27:23.596920] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 2320797 2 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2320797 2 idle 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320797 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320797 -w 256 00:24:15.168 18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:15.426 18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320816 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.52 reactor_2' 00:24:15.426 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320816 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.52 reactor_2 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:15.427 18:27:23 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:15.427 [2024-07-24 18:27:23.953730] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:15.427 [2024-07-24 18:27:23.953830] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:24:15.427 [2024-07-24 18:27:23.953851] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 2320797 0 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 2320797 0 idle 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=2320797 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:15.427 18:27:23 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 2320797 -w 256 00:24:15.427 18:27:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='2320797 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.36 reactor_0' 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 2320797 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.36 reactor_0 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:15.686 18:27:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 2320797 00:24:15.686 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2320797 ']' 00:24:15.686 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 
2320797 00:24:15.686 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:24:15.686 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.686 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2320797 00:24:15.686 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:15.686 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:15.686 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2320797' 00:24:15.686 killing process with pid 2320797 00:24:15.686 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 2320797 00:24:15.686 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 2320797 00:24:15.945 18:27:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:24:15.945 18:27:24 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:15.945 00:24:15.945 real 0m8.370s 00:24:15.945 user 0m7.319s 00:24:15.945 sys 0m1.828s 00:24:15.945 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.945 18:27:24 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:15.945 ************************************ 00:24:15.945 END TEST reactor_set_interrupt 00:24:15.945 ************************************ 00:24:15.945 18:27:24 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:15.945 18:27:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:15.945 18:27:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.945 18:27:24 -- common/autotest_common.sh@10 -- # set +x 00:24:15.945 
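The teardown just traced (autotest_common.sh@950-974) kills the target app and waits for it to exit, refusing to signal a bare `sudo` wrapper. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from autotest_common.sh:

```shell
#!/usr/bin/env bash
# Kill a test process by pid and reap it, mirroring the traced killprocess flow.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # the '[' -z ... ']' guard in the log
    kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        # Resolve the process name; never signal a plain sudo wrapper directly.
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap; ignore non-child / signal status
}
```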
************************************ 00:24:15.945 START TEST reap_unregistered_poller 00:24:15.945 ************************************ 00:24:15.945 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:16.207 * Looking for test storage... 00:24:16.207 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:16.207 18:27:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:24:16.207 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:16.207 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:16.207 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:16.207 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
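The interrupt_common.sh preamble traced here resolves the test directory and the repository root relative to the running script. The pattern, under the assumption of SPDK's `test/interrupt` layout (two levels below the repo root):

```shell
#!/usr/bin/env bash
# Resolve the directory of the running script, then the repo root two levels up,
# as interrupt_common.sh@5-6 does in the trace above.
testdir=$(readlink -f "$(dirname "$0")")
rootdir=$(readlink -f "$testdir/../..")
```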
00:24:16.207 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:16.207 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:24:16.207 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:16.207 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:24:16.207 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:16.207 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:16.207 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:24:16.207 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:24:16.207 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:24:16.207 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@7 -- # 
CONFIG_PREFIX=/usr/local 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:16.207 
18:27:24 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:16.207 18:27:24 reap_unregistered_poller -- 
common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:24:16.207 18:27:24 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@65 
-- # CONFIG_APPS=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:24:16.208 18:27:24 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:24:16.208 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:16.208 18:27:24 
reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:24:16.208 18:27:24 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef 
SPDK_CONFIG_H 00:24:16.208 #define SPDK_CONFIG_H 00:24:16.208 #define SPDK_CONFIG_APPS 1 00:24:16.208 #define SPDK_CONFIG_ARCH native 00:24:16.208 #undef SPDK_CONFIG_ASAN 00:24:16.208 #undef SPDK_CONFIG_AVAHI 00:24:16.208 #undef SPDK_CONFIG_CET 00:24:16.208 #define SPDK_CONFIG_COVERAGE 1 00:24:16.208 #define SPDK_CONFIG_CROSS_PREFIX 00:24:16.208 #define SPDK_CONFIG_CRYPTO 1 00:24:16.208 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:24:16.208 #undef SPDK_CONFIG_CUSTOMOCF 00:24:16.208 #undef SPDK_CONFIG_DAOS 00:24:16.208 #define SPDK_CONFIG_DAOS_DIR 00:24:16.208 #define SPDK_CONFIG_DEBUG 1 00:24:16.208 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:24:16.208 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:16.208 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:16.208 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:16.208 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:16.208 #undef SPDK_CONFIG_DPDK_UADK 00:24:16.208 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:16.208 #define SPDK_CONFIG_EXAMPLES 1 00:24:16.208 #undef SPDK_CONFIG_FC 00:24:16.208 #define SPDK_CONFIG_FC_PATH 00:24:16.208 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:16.208 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:16.208 #undef SPDK_CONFIG_FUSE 00:24:16.208 #undef SPDK_CONFIG_FUZZER 00:24:16.208 #define SPDK_CONFIG_FUZZER_LIB 00:24:16.208 #undef SPDK_CONFIG_GOLANG 00:24:16.208 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:16.208 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:24:16.208 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:16.208 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:24:16.208 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:16.208 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:16.208 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:16.208 #define SPDK_CONFIG_IDXD 1 00:24:16.208 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:16.208 #define SPDK_CONFIG_IPSEC_MB 1 00:24:16.208 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 
00:24:16.208 #define SPDK_CONFIG_ISAL 1 00:24:16.208 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:16.208 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:16.208 #define SPDK_CONFIG_LIBDIR 00:24:16.208 #undef SPDK_CONFIG_LTO 00:24:16.208 #define SPDK_CONFIG_MAX_LCORES 128 00:24:16.208 #define SPDK_CONFIG_NVME_CUSE 1 00:24:16.208 #undef SPDK_CONFIG_OCF 00:24:16.208 #define SPDK_CONFIG_OCF_PATH 00:24:16.208 #define SPDK_CONFIG_OPENSSL_PATH 00:24:16.208 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:16.208 #define SPDK_CONFIG_PGO_DIR 00:24:16.208 #undef SPDK_CONFIG_PGO_USE 00:24:16.208 #define SPDK_CONFIG_PREFIX /usr/local 00:24:16.208 #undef SPDK_CONFIG_RAID5F 00:24:16.208 #undef SPDK_CONFIG_RBD 00:24:16.208 #define SPDK_CONFIG_RDMA 1 00:24:16.208 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:16.208 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:16.208 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:16.208 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:16.208 #define SPDK_CONFIG_SHARED 1 00:24:16.208 #undef SPDK_CONFIG_SMA 00:24:16.208 #define SPDK_CONFIG_TESTS 1 00:24:16.208 #undef SPDK_CONFIG_TSAN 00:24:16.208 #define SPDK_CONFIG_UBLK 1 00:24:16.208 #define SPDK_CONFIG_UBSAN 1 00:24:16.208 #undef SPDK_CONFIG_UNIT_TESTS 00:24:16.208 #undef SPDK_CONFIG_URING 00:24:16.208 #define SPDK_CONFIG_URING_PATH 00:24:16.208 #undef SPDK_CONFIG_URING_ZNS 00:24:16.208 #undef SPDK_CONFIG_USDT 00:24:16.208 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:24:16.208 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:24:16.208 #undef SPDK_CONFIG_VFIO_USER 00:24:16.208 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:16.208 #define SPDK_CONFIG_VHOST 1 00:24:16.208 #define SPDK_CONFIG_VIRTIO 1 00:24:16.208 #undef SPDK_CONFIG_VTUNE 00:24:16.208 #define SPDK_CONFIG_VTUNE_DIR 00:24:16.208 #define SPDK_CONFIG_WERROR 1 00:24:16.208 #define SPDK_CONFIG_WPDK_DIR 00:24:16.208 #undef SPDK_CONFIG_XNVME 00:24:16.208 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:16.208 18:27:24 reap_unregistered_poller 
-- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:16.208 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:16.208 18:27:24 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.208 18:27:24 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.208 18:27:24 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.208 18:27:24 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.208 18:27:24 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.208 18:27:24 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.208 18:27:24 
reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:24:16.208 18:27:24 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.208 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:16.208 18:27:24 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:16.208 18:27:24 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:16.208 18:27:24 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:16.208 18:27:24 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:24:16.208 18:27:24 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:16.208 18:27:24 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:24:16.208 18:27:24 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:24:16.209 18:27:24 reap_unregistered_poller -- 
pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:24:16.209 18:27:24 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- 
common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:24:16.209 18:27:24 reap_unregistered_poller -- 
common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:24:16.209 18:27:24 reap_unregistered_poller -- 
common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- 
common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 1 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:24:16.209 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:16.210 18:27:24 reap_unregistered_poller -- 
common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export 
PCI_BLOCK_SYNC_ON_RESET=yes 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@200 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@202 -- # cat 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:16.210 18:27:24 reap_unregistered_poller -- 
common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export valgrind= 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@265 -- # valgrind= 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@271 -- # uname -s 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@278 -- # export 
HUGE_EVEN_ALLOC=yes 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@278 -- # HUGE_EVEN_ALLOC=yes 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@281 -- # MAKE=make 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@301 -- # TEST_MODE= 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@320 -- # [[ -z 2321690 ]] 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@320 -- # kill -0 2321690 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local mount target_dir 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.MSxPBm 00:24:16.210 18:27:24 
reap_unregistered_poller -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.MSxPBm/tests/interrupt /tmp/spdk.MSxPBm 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@329 -- # df -T 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:24:16.210 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=951066624 
00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4333363200 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=50771771392 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742276608 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=10970505216 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30815502336 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=55635968 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:24:16.211 18:27:24 reap_unregistered_poller -- 
common/autotest_common.sh@364 -- # avails["$mount"]=12338679808 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348456960 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=9777152 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30866563072 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871138304 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4575232 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=6174220288 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174224384 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:24:16.211 * Looking for test storage... 
00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@370 -- # local target_space new_size 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mount=/ 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@376 -- # target_space=50771771392 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@383 -- # new_size=13185097728 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:16.211 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@391 -- # return 0 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=2321731 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:16.211 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 2321731 /var/tmp/spdk.sock 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@831 -- # '[' -z 2321731 ']' 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.211 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.212 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:16.212 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.212 18:27:24 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:16.212 18:27:24 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:16.212 [2024-07-24 18:27:24.779846] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:24:16.212 [2024-07-24 18:27:24.779889] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321731 ] 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:01.7 cannot be used 00:24:16.471 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:02.3 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:24:16.471 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:02.1 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:16.471 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:16.471 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:16.471 [2024-07-24 18:27:24.874425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:16.471 [2024-07-24 18:27:24.949396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.471 [2024-07-24 18:27:24.949490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.471 [2024-07-24 18:27:24.949491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.471 [2024-07-24 18:27:25.013631] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
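The test below extracts poller names from the `thread_get_pollers` reply with `jq`. Those filters can be exercised standalone against a canned reply (JSON shape copied from the log output; the variable names are illustrative):

```shell
# Reproduce the jq extraction used by reap_unregistered_poller.sh on a
# canned thread_get_pollers reply for the first thread.
app_thread='{
  "name": "app_thread",
  "id": 1,
  "active_pollers": [],
  "timed_pollers": [
    { "name": "rpc_subsystem_poll_servers", "id": 1, "state": "waiting",
      "run_count": 0, "busy_count": 0, "period_ticks": 10000000 }
  ],
  "paused_pollers": []
}'

# Same filters as the test script: active pollers (empty here), then timed.
active=$(printf '%s' "$app_thread" | jq -r '.active_pollers[].name')
timed=$(printf '%s' "$app_thread" | jq -r '.timed_pollers[].name')
native_pollers="$active $timed"
echo "$native_pollers"   # prints " rpc_subsystem_poll_servers"
```

`jq -r '.active_pollers[].name'` emits nothing for an empty array, which is why the concatenated result keeps a leading space — the same shape the test later compares against with `[[ ... == \ \r\p\c... ]]`.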
00:24:17.038 18:27:25 reap_unregistered_poller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.038 18:27:25 reap_unregistered_poller -- common/autotest_common.sh@864 -- # return 0 00:24:17.038 18:27:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:24:17.038 18:27:25 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.038 18:27:25 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:17.038 18:27:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:24:17.038 18:27:25 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.038 18:27:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:24:17.038 "name": "app_thread", 00:24:17.038 "id": 1, 00:24:17.038 "active_pollers": [], 00:24:17.038 "timed_pollers": [ 00:24:17.038 { 00:24:17.038 "name": "rpc_subsystem_poll_servers", 00:24:17.038 "id": 1, 00:24:17.038 "state": "waiting", 00:24:17.038 "run_count": 0, 00:24:17.038 "busy_count": 0, 00:24:17.038 "period_ticks": 10000000 00:24:17.038 } 00:24:17.038 ], 00:24:17.038 "paused_pollers": [] 00:24:17.038 }' 00:24:17.038 18:27:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:24:17.297 18:27:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:24:17.297 18:27:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:24:17.297 18:27:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:24:17.297 18:27:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:24:17.297 18:27:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:24:17.297 
18:27:25 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:24:17.297 18:27:25 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:17.297 18:27:25 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:17.297 5000+0 records in 00:24:17.297 5000+0 records out 00:24:17.297 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0270993 s, 378 MB/s 00:24:17.297 18:27:25 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:17.555 AIO0 00:24:17.555 18:27:25 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:17.555 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:24:17.814 "name": "app_thread", 00:24:17.814 "id": 1, 00:24:17.814 "active_pollers": [], 00:24:17.814 "timed_pollers": [ 00:24:17.814 { 00:24:17.814 "name": "rpc_subsystem_poll_servers", 00:24:17.814 "id": 1, 00:24:17.814 "state": "waiting", 00:24:17.814 "run_count": 0, 00:24:17.814 "busy_count": 0, 
00:24:17.814 "period_ticks": 10000000 00:24:17.814 } 00:24:17.814 ], 00:24:17.814 "paused_pollers": [] 00:24:17.814 }' 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:17.814 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 2321731 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@950 -- # '[' -z 2321731 ']' 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@954 -- # kill -0 2321731 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@955 -- # uname 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2321731 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 2321731' 00:24:17.814 killing process with pid 2321731 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@969 -- # kill 2321731 00:24:17.814 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@974 -- # wait 2321731 00:24:18.073 18:27:26 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:24:18.073 18:27:26 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:18.073 00:24:18.073 real 0m2.037s 00:24:18.073 user 0m1.137s 00:24:18.073 sys 0m0.560s 00:24:18.073 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:18.073 18:27:26 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:18.073 ************************************ 00:24:18.073 END TEST reap_unregistered_poller 00:24:18.073 ************************************ 00:24:18.073 18:27:26 -- spdk/autotest.sh@202 -- # uname -s 00:24:18.073 18:27:26 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:24:18.073 18:27:26 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:24:18.073 18:27:26 -- spdk/autotest.sh@209 -- # [[ 1 -eq 0 ]] 00:24:18.073 18:27:26 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:24:18.073 18:27:26 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:24:18.073 18:27:26 -- spdk/autotest.sh@264 -- # timing_exit lib 00:24:18.073 18:27:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:18.074 18:27:26 -- common/autotest_common.sh@10 -- # set +x 00:24:18.074 18:27:26 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 
-- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@351 -- # '[' 1 -eq 1 ']' 00:24:18.074 18:27:26 -- spdk/autotest.sh@352 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:24:18.074 18:27:26 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:18.074 18:27:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:18.074 18:27:26 -- common/autotest_common.sh@10 -- # set +x 00:24:18.333 ************************************ 00:24:18.333 START TEST compress_compdev 00:24:18.333 ************************************ 00:24:18.333 18:27:26 compress_compdev -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:24:18.333 * Looking for test storage... 
00:24:18.333 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:24:18.333 18:27:26 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.333 18:27:26 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:18.333 18:27:26 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.333 18:27:26 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.334 18:27:26 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.334 18:27:26 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.334 18:27:26 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.334 18:27:26 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.334 18:27:26 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:24:18.334 18:27:26 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.334 18:27:26 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:24:18.334 18:27:26 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.334 18:27:26 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.334 18:27:26 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.334 18:27:26 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.334 18:27:26 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.334 18:27:26 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.334 18:27:26 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.334 18:27:26 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.334 18:27:26 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:18.334 18:27:26 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:24:18.334 18:27:26 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:24:18.334 18:27:26 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:24:18.334 18:27:26 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:18.334 18:27:26 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2322101 00:24:18.334 18:27:26 compress_compdev -- 
compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:18.334 18:27:26 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2322101 00:24:18.334 18:27:26 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 2322101 ']' 00:24:18.334 18:27:26 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.334 18:27:26 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:18.334 18:27:26 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:18.334 18:27:26 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.334 18:27:26 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:18.334 18:27:26 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:18.334 [2024-07-24 18:27:26.850584] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
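Once bdevperf is listening, `create_vols` builds the stack over RPC: attach the NVMe controller as a bdev, carve out an lvstore and a thin-provisioned lvol, then layer the compress bdev on top and drive I/O. A sketch of that RPC sequence, wrapped in a function mirroring the log's `create_vols` (paths, sizes, and names copied from the log; error handling omitted, and the `SPDK_ROOT` default is an assumption):

```shell
# RPC sequence for the compress bdevperf stack: NVMe -> lvstore -> lvol -> compress.
create_vols() {
    SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/crypto-phy-autotest/spdk}
    rpc="$SPDK_ROOT/scripts/rpc.py"

    # Attach local NVMe controllers as bdevs (Nvme0n1, ...).
    "$SPDK_ROOT/scripts/gen_nvme.sh" | "$rpc" load_subsystem_config

    # Logical-volume store on the NVMe bdev, plus a 100 MiB thin lvol.
    "$rpc" bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
    "$rpc" bdev_lvol_create -t -l lvs0 lv0 100

    # Compress bdev backed by the lvol; pmem metadata lives under /tmp/pmem.
    mkdir -p /tmp/pmem
    "$rpc" bdev_compress_create -b lvs0/lv0 -p /tmp/pmem

    # Kick off the queued bdevperf job against the finished stack.
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" perform_tests
}
```

The thin-provisioned lvol (`-t`) is what makes `num_allocated_clusters: 0` appear in the later `bdev_get_bdevs` dump: no clusters are allocated until the compress layer actually writes.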
00:24:18.334 [2024-07-24 18:27:26.850636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322101 ] 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:01.7 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:02.3 cannot be used 
00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:02.1 cannot be used 00:24:18.334 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:18.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:18.334 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:18.594 [2024-07-24 18:27:26.943973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:18.594 [2024-07-24 18:27:27.020136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.594 [2024-07-24 18:27:27.020136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.162 [2024-07-24 18:27:27.536817] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:19.162 18:27:27 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:19.162 18:27:27 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:24:19.162 18:27:27 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:24:19.162 18:27:27 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:19.162 18:27:27 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:22.454 [2024-07-24 18:27:30.669831] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x208c940 PMD being used: compress_qat 00:24:22.454 18:27:30 
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:22.454 18:27:30 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:24:22.454 18:27:30 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:22.454 18:27:30 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:24:22.454 18:27:30 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:22.454 18:27:30 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:22.454 18:27:30 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:22.454 18:27:30 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:22.454 [ 00:24:22.454 { 00:24:22.454 "name": "Nvme0n1", 00:24:22.454 "aliases": [ 00:24:22.454 "b1d6a084-b5f2-4304-93a4-ea35e166b499" 00:24:22.454 ], 00:24:22.454 "product_name": "NVMe disk", 00:24:22.454 "block_size": 512, 00:24:22.454 "num_blocks": 3907029168, 00:24:22.454 "uuid": "b1d6a084-b5f2-4304-93a4-ea35e166b499", 00:24:22.454 "assigned_rate_limits": { 00:24:22.454 "rw_ios_per_sec": 0, 00:24:22.454 "rw_mbytes_per_sec": 0, 00:24:22.454 "r_mbytes_per_sec": 0, 00:24:22.454 "w_mbytes_per_sec": 0 00:24:22.454 }, 00:24:22.454 "claimed": false, 00:24:22.454 "zoned": false, 00:24:22.454 "supported_io_types": { 00:24:22.454 "read": true, 00:24:22.454 "write": true, 00:24:22.454 "unmap": true, 00:24:22.454 "flush": true, 00:24:22.454 "reset": true, 00:24:22.454 "nvme_admin": true, 00:24:22.454 "nvme_io": true, 00:24:22.454 "nvme_io_md": false, 00:24:22.454 "write_zeroes": true, 00:24:22.454 "zcopy": false, 00:24:22.454 "get_zone_info": false, 00:24:22.454 "zone_management": false, 00:24:22.454 "zone_append": false, 00:24:22.454 "compare": false, 00:24:22.454 "compare_and_write": false, 00:24:22.454 
"abort": true, 00:24:22.454 "seek_hole": false, 00:24:22.454 "seek_data": false, 00:24:22.454 "copy": false, 00:24:22.454 "nvme_iov_md": false 00:24:22.454 }, 00:24:22.454 "driver_specific": { 00:24:22.454 "nvme": [ 00:24:22.454 { 00:24:22.454 "pci_address": "0000:d8:00.0", 00:24:22.454 "trid": { 00:24:22.454 "trtype": "PCIe", 00:24:22.454 "traddr": "0000:d8:00.0" 00:24:22.454 }, 00:24:22.454 "ctrlr_data": { 00:24:22.454 "cntlid": 0, 00:24:22.454 "vendor_id": "0x8086", 00:24:22.454 "model_number": "INTEL SSDPE2KX020T8", 00:24:22.454 "serial_number": "BTLJ125504VE2P0BGN", 00:24:22.454 "firmware_revision": "VDV10170", 00:24:22.454 "oacs": { 00:24:22.454 "security": 0, 00:24:22.454 "format": 1, 00:24:22.454 "firmware": 1, 00:24:22.454 "ns_manage": 1 00:24:22.454 }, 00:24:22.454 "multi_ctrlr": false, 00:24:22.454 "ana_reporting": false 00:24:22.454 }, 00:24:22.454 "vs": { 00:24:22.454 "nvme_version": "1.2" 00:24:22.454 }, 00:24:22.454 "ns_data": { 00:24:22.454 "id": 1, 00:24:22.454 "can_share": false 00:24:22.454 } 00:24:22.454 } 00:24:22.454 ], 00:24:22.454 "mp_policy": "active_passive" 00:24:22.454 } 00:24:22.454 } 00:24:22.454 ] 00:24:22.454 18:27:31 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:24:22.454 18:27:31 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:22.714 [2024-07-24 18:27:31.189809] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1ef1650 PMD being used: compress_qat 00:24:23.651 8235223d-296e-4692-bff7-ebdf3eed0787 00:24:23.651 18:27:32 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:23.911 994ca102-0174-4281-90c7-c2dfece8ae2c 00:24:23.911 18:27:32 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:23.911 18:27:32 compress_compdev -- common/autotest_common.sh@899 -- 
# local bdev_name=lvs0/lv0 00:24:23.911 18:27:32 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:23.911 18:27:32 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:24:23.911 18:27:32 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:23.911 18:27:32 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:23.911 18:27:32 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:24.170 18:27:32 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:24.170 [ 00:24:24.170 { 00:24:24.170 "name": "994ca102-0174-4281-90c7-c2dfece8ae2c", 00:24:24.170 "aliases": [ 00:24:24.170 "lvs0/lv0" 00:24:24.170 ], 00:24:24.170 "product_name": "Logical Volume", 00:24:24.170 "block_size": 512, 00:24:24.170 "num_blocks": 204800, 00:24:24.170 "uuid": "994ca102-0174-4281-90c7-c2dfece8ae2c", 00:24:24.170 "assigned_rate_limits": { 00:24:24.170 "rw_ios_per_sec": 0, 00:24:24.170 "rw_mbytes_per_sec": 0, 00:24:24.170 "r_mbytes_per_sec": 0, 00:24:24.170 "w_mbytes_per_sec": 0 00:24:24.170 }, 00:24:24.170 "claimed": false, 00:24:24.170 "zoned": false, 00:24:24.170 "supported_io_types": { 00:24:24.170 "read": true, 00:24:24.170 "write": true, 00:24:24.170 "unmap": true, 00:24:24.170 "flush": false, 00:24:24.170 "reset": true, 00:24:24.170 "nvme_admin": false, 00:24:24.170 "nvme_io": false, 00:24:24.170 "nvme_io_md": false, 00:24:24.170 "write_zeroes": true, 00:24:24.170 "zcopy": false, 00:24:24.170 "get_zone_info": false, 00:24:24.170 "zone_management": false, 00:24:24.170 "zone_append": false, 00:24:24.170 "compare": false, 00:24:24.170 "compare_and_write": false, 00:24:24.170 "abort": false, 00:24:24.170 "seek_hole": true, 00:24:24.170 "seek_data": true, 00:24:24.170 "copy": false, 00:24:24.170 "nvme_iov_md": false 
00:24:24.170 }, 00:24:24.170 "driver_specific": { 00:24:24.170 "lvol": { 00:24:24.170 "lvol_store_uuid": "8235223d-296e-4692-bff7-ebdf3eed0787", 00:24:24.170 "base_bdev": "Nvme0n1", 00:24:24.170 "thin_provision": true, 00:24:24.170 "num_allocated_clusters": 0, 00:24:24.170 "snapshot": false, 00:24:24.170 "clone": false, 00:24:24.170 "esnap_clone": false 00:24:24.170 } 00:24:24.170 } 00:24:24.170 } 00:24:24.170 ] 00:24:24.170 18:27:32 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:24:24.170 18:27:32 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:24:24.170 18:27:32 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:24:24.431 [2024-07-24 18:27:32.879538] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:24.431 COMP_lvs0/lv0 00:24:24.431 18:27:32 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:24.431 18:27:32 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:24:24.431 18:27:32 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:24.431 18:27:32 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:24:24.431 18:27:32 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:24.431 18:27:32 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:24.431 18:27:32 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:24.694 18:27:33 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:24.694 [ 00:24:24.694 { 00:24:24.694 "name": "COMP_lvs0/lv0", 00:24:24.694 "aliases": [ 00:24:24.694 
"91ed8379-773e-5951-a22b-0b9325dd3371" 00:24:24.694 ], 00:24:24.694 "product_name": "compress", 00:24:24.694 "block_size": 512, 00:24:24.694 "num_blocks": 200704, 00:24:24.694 "uuid": "91ed8379-773e-5951-a22b-0b9325dd3371", 00:24:24.694 "assigned_rate_limits": { 00:24:24.694 "rw_ios_per_sec": 0, 00:24:24.694 "rw_mbytes_per_sec": 0, 00:24:24.694 "r_mbytes_per_sec": 0, 00:24:24.694 "w_mbytes_per_sec": 0 00:24:24.694 }, 00:24:24.694 "claimed": false, 00:24:24.694 "zoned": false, 00:24:24.694 "supported_io_types": { 00:24:24.694 "read": true, 00:24:24.694 "write": true, 00:24:24.694 "unmap": false, 00:24:24.694 "flush": false, 00:24:24.694 "reset": false, 00:24:24.694 "nvme_admin": false, 00:24:24.694 "nvme_io": false, 00:24:24.694 "nvme_io_md": false, 00:24:24.694 "write_zeroes": true, 00:24:24.694 "zcopy": false, 00:24:24.694 "get_zone_info": false, 00:24:24.694 "zone_management": false, 00:24:24.694 "zone_append": false, 00:24:24.694 "compare": false, 00:24:24.694 "compare_and_write": false, 00:24:24.694 "abort": false, 00:24:24.694 "seek_hole": false, 00:24:24.694 "seek_data": false, 00:24:24.694 "copy": false, 00:24:24.694 "nvme_iov_md": false 00:24:24.694 }, 00:24:24.694 "driver_specific": { 00:24:24.694 "compress": { 00:24:24.694 "name": "COMP_lvs0/lv0", 00:24:24.694 "base_bdev_name": "994ca102-0174-4281-90c7-c2dfece8ae2c", 00:24:24.694 "pm_path": "/tmp/pmem/72b0e6c5-966a-4baa-ad8c-5715d9545a5c" 00:24:24.694 } 00:24:24.694 } 00:24:24.694 } 00:24:24.694 ] 00:24:24.694 18:27:33 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:24:24.694 18:27:33 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:24.955 [2024-07-24 18:27:33.361547] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7effb81b15c0 PMD being used: compress_qat 00:24:24.955 [2024-07-24 18:27:33.363130] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2088d70 PMD being 
used: compress_qat 00:24:24.955 Running I/O for 3 seconds... 00:24:28.247 00:24:28.247 Latency(us) 00:24:28.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.247 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:28.247 Verification LBA range: start 0x0 length 0x3100 00:24:28.247 COMP_lvs0/lv0 : 3.00 4043.13 15.79 0.00 0.00 7881.18 127.80 14994.64 00:24:28.247 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:28.247 Verification LBA range: start 0x3100 length 0x3100 00:24:28.247 COMP_lvs0/lv0 : 3.00 4144.46 16.19 0.00 0.00 7690.62 119.60 14889.78 00:24:28.247 =================================================================================================================== 00:24:28.247 Total : 8187.60 31.98 0.00 0.00 7784.72 119.60 14994.64 00:24:28.247 0 00:24:28.247 18:27:36 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:28.247 18:27:36 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:28.247 18:27:36 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:28.247 18:27:36 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:28.247 18:27:36 compress_compdev -- compress/compress.sh@78 -- # killprocess 2322101 00:24:28.247 18:27:36 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 2322101 ']' 00:24:28.247 18:27:36 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 2322101 00:24:28.247 18:27:36 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:24:28.247 18:27:36 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.247 18:27:36 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2322101 00:24:28.247 18:27:36 compress_compdev 
-- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:28.247 18:27:36 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:28.247 18:27:36 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2322101' 00:24:28.247 killing process with pid 2322101 00:24:28.247 18:27:36 compress_compdev -- common/autotest_common.sh@969 -- # kill 2322101 00:24:28.247 Received shutdown signal, test time was about 3.000000 seconds 00:24:28.247 00:24:28.247 Latency(us) 00:24:28.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.247 =================================================================================================================== 00:24:28.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.247 18:27:36 compress_compdev -- common/autotest_common.sh@974 -- # wait 2322101 00:24:30.785 18:27:39 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:24:30.785 18:27:39 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:30.785 18:27:39 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2324252 00:24:30.785 18:27:39 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:30.785 18:27:39 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:30.785 18:27:39 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2324252 00:24:30.785 18:27:39 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 2324252 ']' 00:24:30.785 18:27:39 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.785 18:27:39 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 
00:24:30.785 18:27:39 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.785 18:27:39 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.785 18:27:39 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:30.785 [2024-07-24 18:27:39.317840] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:24:30.785 [2024-07-24 18:27:39.317894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324252 ] 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:01.1 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:01.2 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:01.3 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:01.4 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:01.5 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:01.6 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:01.7 cannot be used 
00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:02.0 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:02.1 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:02.2 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.785 EAL: Requested device 0000:b3:02.3 cannot be used 00:24:30.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b3:02.4 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b3:02.5 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b3:02.6 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b3:02.7 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:01.0 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:01.1 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:01.2 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:01.3 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:01.4 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:01.5 cannot be used 00:24:30.786 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:01.6 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:01.7 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:02.0 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:02.1 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:02.2 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:02.3 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:02.4 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:02.5 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:02.6 cannot be used 00:24:30.786 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:30.786 EAL: Requested device 0000:b5:02.7 cannot be used 00:24:31.045 [2024-07-24 18:27:39.411154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:31.045 [2024-07-24 18:27:39.485907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.045 [2024-07-24 18:27:39.485910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.613 [2024-07-24 18:27:39.997861] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:31.613 18:27:40 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.613 18:27:40 compress_compdev -- 
common/autotest_common.sh@864 -- # return 0 00:24:31.613 18:27:40 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:24:31.613 18:27:40 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:31.613 18:27:40 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:34.909 [2024-07-24 18:27:43.141926] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xfca940 PMD being used: compress_qat 00:24:34.909 18:27:43 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:34.909 18:27:43 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:24:34.909 18:27:43 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:34.909 18:27:43 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:24:34.909 18:27:43 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:34.909 18:27:43 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:34.909 18:27:43 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:34.909 18:27:43 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:35.173 [ 00:24:35.173 { 00:24:35.173 "name": "Nvme0n1", 00:24:35.173 "aliases": [ 00:24:35.173 "699a7621-b62b-4c98-b90b-a8c6b3c904ca" 00:24:35.173 ], 00:24:35.173 "product_name": "NVMe disk", 00:24:35.173 "block_size": 512, 00:24:35.173 "num_blocks": 3907029168, 00:24:35.173 "uuid": "699a7621-b62b-4c98-b90b-a8c6b3c904ca", 00:24:35.173 "assigned_rate_limits": { 00:24:35.173 "rw_ios_per_sec": 0, 00:24:35.173 "rw_mbytes_per_sec": 0, 00:24:35.173 "r_mbytes_per_sec": 0, 00:24:35.173 "w_mbytes_per_sec": 0 00:24:35.173 }, 00:24:35.173 
"claimed": false, 00:24:35.173 "zoned": false, 00:24:35.173 "supported_io_types": { 00:24:35.173 "read": true, 00:24:35.173 "write": true, 00:24:35.173 "unmap": true, 00:24:35.173 "flush": true, 00:24:35.173 "reset": true, 00:24:35.173 "nvme_admin": true, 00:24:35.173 "nvme_io": true, 00:24:35.173 "nvme_io_md": false, 00:24:35.173 "write_zeroes": true, 00:24:35.173 "zcopy": false, 00:24:35.173 "get_zone_info": false, 00:24:35.173 "zone_management": false, 00:24:35.173 "zone_append": false, 00:24:35.173 "compare": false, 00:24:35.173 "compare_and_write": false, 00:24:35.173 "abort": true, 00:24:35.173 "seek_hole": false, 00:24:35.173 "seek_data": false, 00:24:35.173 "copy": false, 00:24:35.173 "nvme_iov_md": false 00:24:35.173 }, 00:24:35.173 "driver_specific": { 00:24:35.173 "nvme": [ 00:24:35.173 { 00:24:35.173 "pci_address": "0000:d8:00.0", 00:24:35.173 "trid": { 00:24:35.173 "trtype": "PCIe", 00:24:35.173 "traddr": "0000:d8:00.0" 00:24:35.173 }, 00:24:35.173 "ctrlr_data": { 00:24:35.173 "cntlid": 0, 00:24:35.173 "vendor_id": "0x8086", 00:24:35.173 "model_number": "INTEL SSDPE2KX020T8", 00:24:35.173 "serial_number": "BTLJ125504VE2P0BGN", 00:24:35.173 "firmware_revision": "VDV10170", 00:24:35.173 "oacs": { 00:24:35.173 "security": 0, 00:24:35.173 "format": 1, 00:24:35.173 "firmware": 1, 00:24:35.173 "ns_manage": 1 00:24:35.173 }, 00:24:35.173 "multi_ctrlr": false, 00:24:35.173 "ana_reporting": false 00:24:35.173 }, 00:24:35.173 "vs": { 00:24:35.173 "nvme_version": "1.2" 00:24:35.173 }, 00:24:35.173 "ns_data": { 00:24:35.173 "id": 1, 00:24:35.173 "can_share": false 00:24:35.173 } 00:24:35.174 } 00:24:35.174 ], 00:24:35.174 "mp_policy": "active_passive" 00:24:35.174 } 00:24:35.174 } 00:24:35.174 ] 00:24:35.174 18:27:43 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:24:35.174 18:27:43 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 
lvs0 00:24:35.174 [2024-07-24 18:27:43.670023] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xe2f650 PMD being used: compress_qat 00:24:36.111 1169123b-4d5d-4024-ba5d-85d24d4fc1a4 00:24:36.111 18:27:44 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:36.370 0cbbb59a-84a6-4f38-83c7-0958e52761e3 00:24:36.370 18:27:44 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:36.370 18:27:44 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:24:36.370 18:27:44 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:36.370 18:27:44 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:24:36.370 18:27:44 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:36.370 18:27:44 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:36.370 18:27:44 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:36.629 18:27:44 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:36.629 [ 00:24:36.629 { 00:24:36.629 "name": "0cbbb59a-84a6-4f38-83c7-0958e52761e3", 00:24:36.629 "aliases": [ 00:24:36.629 "lvs0/lv0" 00:24:36.629 ], 00:24:36.629 "product_name": "Logical Volume", 00:24:36.629 "block_size": 512, 00:24:36.629 "num_blocks": 204800, 00:24:36.629 "uuid": "0cbbb59a-84a6-4f38-83c7-0958e52761e3", 00:24:36.629 "assigned_rate_limits": { 00:24:36.629 "rw_ios_per_sec": 0, 00:24:36.629 "rw_mbytes_per_sec": 0, 00:24:36.629 "r_mbytes_per_sec": 0, 00:24:36.629 "w_mbytes_per_sec": 0 00:24:36.629 }, 00:24:36.629 "claimed": false, 00:24:36.629 "zoned": false, 00:24:36.629 "supported_io_types": { 00:24:36.629 "read": true, 00:24:36.629 "write": true, 
00:24:36.629 "unmap": true, 00:24:36.629 "flush": false, 00:24:36.629 "reset": true, 00:24:36.629 "nvme_admin": false, 00:24:36.629 "nvme_io": false, 00:24:36.629 "nvme_io_md": false, 00:24:36.629 "write_zeroes": true, 00:24:36.629 "zcopy": false, 00:24:36.629 "get_zone_info": false, 00:24:36.629 "zone_management": false, 00:24:36.629 "zone_append": false, 00:24:36.629 "compare": false, 00:24:36.629 "compare_and_write": false, 00:24:36.629 "abort": false, 00:24:36.629 "seek_hole": true, 00:24:36.629 "seek_data": true, 00:24:36.629 "copy": false, 00:24:36.629 "nvme_iov_md": false 00:24:36.629 }, 00:24:36.629 "driver_specific": { 00:24:36.629 "lvol": { 00:24:36.629 "lvol_store_uuid": "1169123b-4d5d-4024-ba5d-85d24d4fc1a4", 00:24:36.629 "base_bdev": "Nvme0n1", 00:24:36.629 "thin_provision": true, 00:24:36.629 "num_allocated_clusters": 0, 00:24:36.629 "snapshot": false, 00:24:36.629 "clone": false, 00:24:36.629 "esnap_clone": false 00:24:36.629 } 00:24:36.629 } 00:24:36.629 } 00:24:36.629 ] 00:24:36.629 18:27:45 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:24:36.629 18:27:45 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:24:36.629 18:27:45 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:24:36.888 [2024-07-24 18:27:45.289065] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:36.888 COMP_lvs0/lv0 00:24:36.888 18:27:45 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:36.888 18:27:45 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:24:36.888 18:27:45 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:36.888 18:27:45 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:24:36.888 18:27:45 compress_compdev -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:24:36.888 18:27:45 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:36.888 18:27:45 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:36.888 18:27:45 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:37.148 [ 00:24:37.148 { 00:24:37.148 "name": "COMP_lvs0/lv0", 00:24:37.148 "aliases": [ 00:24:37.148 "a416a180-11ff-5b49-8c56-a27849f4e0fa" 00:24:37.148 ], 00:24:37.148 "product_name": "compress", 00:24:37.148 "block_size": 512, 00:24:37.148 "num_blocks": 200704, 00:24:37.148 "uuid": "a416a180-11ff-5b49-8c56-a27849f4e0fa", 00:24:37.148 "assigned_rate_limits": { 00:24:37.148 "rw_ios_per_sec": 0, 00:24:37.148 "rw_mbytes_per_sec": 0, 00:24:37.148 "r_mbytes_per_sec": 0, 00:24:37.148 "w_mbytes_per_sec": 0 00:24:37.148 }, 00:24:37.148 "claimed": false, 00:24:37.148 "zoned": false, 00:24:37.148 "supported_io_types": { 00:24:37.148 "read": true, 00:24:37.148 "write": true, 00:24:37.148 "unmap": false, 00:24:37.148 "flush": false, 00:24:37.148 "reset": false, 00:24:37.148 "nvme_admin": false, 00:24:37.148 "nvme_io": false, 00:24:37.148 "nvme_io_md": false, 00:24:37.148 "write_zeroes": true, 00:24:37.148 "zcopy": false, 00:24:37.148 "get_zone_info": false, 00:24:37.148 "zone_management": false, 00:24:37.148 "zone_append": false, 00:24:37.148 "compare": false, 00:24:37.148 "compare_and_write": false, 00:24:37.148 "abort": false, 00:24:37.148 "seek_hole": false, 00:24:37.148 "seek_data": false, 00:24:37.148 "copy": false, 00:24:37.148 "nvme_iov_md": false 00:24:37.148 }, 00:24:37.148 "driver_specific": { 00:24:37.148 "compress": { 00:24:37.148 "name": "COMP_lvs0/lv0", 00:24:37.148 "base_bdev_name": "0cbbb59a-84a6-4f38-83c7-0958e52761e3", 00:24:37.148 "pm_path": "/tmp/pmem/efac34b0-352a-4eba-9d2d-0022a53df99b" 00:24:37.148 } 
00:24:37.148 } 00:24:37.148 } 00:24:37.148 ] 00:24:37.148 18:27:45 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:24:37.148 18:27:45 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:37.148 [2024-07-24 18:27:45.722833] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fde401b15c0 PMD being used: compress_qat 00:24:37.148 [2024-07-24 18:27:45.724451] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xfc6fe0 PMD being used: compress_qat 00:24:37.148 Running I/O for 3 seconds... 00:24:40.438 00:24:40.438 Latency(us) 00:24:40.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.438 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:40.438 Verification LBA range: start 0x0 length 0x3100 00:24:40.438 COMP_lvs0/lv0 : 3.01 4255.03 16.62 0.00 0.00 7484.12 125.34 12635.34 00:24:40.438 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:40.438 Verification LBA range: start 0x3100 length 0x3100 00:24:40.438 COMP_lvs0/lv0 : 3.01 4363.65 17.05 0.00 0.00 7299.72 121.24 12740.20 00:24:40.438 =================================================================================================================== 00:24:40.438 Total : 8618.68 33.67 0.00 0.00 7390.77 121.24 12740.20 00:24:40.438 0 00:24:40.438 18:27:48 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:40.438 18:27:48 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:40.438 18:27:48 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:40.697 18:27:49 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:40.697 18:27:49 compress_compdev -- 
compress/compress.sh@78 -- # killprocess 2324252 00:24:40.697 18:27:49 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 2324252 ']' 00:24:40.697 18:27:49 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 2324252 00:24:40.697 18:27:49 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:24:40.697 18:27:49 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.697 18:27:49 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2324252 00:24:40.697 18:27:49 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:40.697 18:27:49 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:40.697 18:27:49 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2324252' 00:24:40.697 killing process with pid 2324252 00:24:40.697 18:27:49 compress_compdev -- common/autotest_common.sh@969 -- # kill 2324252 00:24:40.697 Received shutdown signal, test time was about 3.000000 seconds 00:24:40.697 00:24:40.697 Latency(us) 00:24:40.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.697 =================================================================================================================== 00:24:40.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.697 18:27:49 compress_compdev -- common/autotest_common.sh@974 -- # wait 2324252 00:24:43.234 18:27:51 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:24:43.234 18:27:51 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:43.234 18:27:51 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=2326405 00:24:43.234 18:27:51 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:43.234 18:27:51 compress_compdev -- compress/compress.sh@67 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:43.234 18:27:51 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 2326405 00:24:43.234 18:27:51 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 2326405 ']' 00:24:43.234 18:27:51 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.234 18:27:51 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:43.234 18:27:51 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.234 18:27:51 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:43.235 18:27:51 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:43.235 [2024-07-24 18:27:51.681273] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:24:43.235 [2024-07-24 18:27:51.681321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2326405 ] 00:24:43.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.235 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:43.235 [identical "Reached maximum number of QAT devices" / "cannot be used" message pair repeated for devices 0000:b3:01.1 through 0000:b3:02.7 and 0000:b5:01.0 through 0000:b5:02.7] 00:24:43.235 [2024-07-24 18:27:51.773710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:43.495 [2024-07-24 18:27:51.849345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.495 [2024-07-24 18:27:51.849347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.062 [2024-07-24 18:27:52.358950] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:44.062 18:27:52 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.062 18:27:52 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:24:44.062 18:27:52 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:24:44.062 18:27:52 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:44.062 18:27:52 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:47.351 [2024-07-24 18:27:55.493897] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x21f8940 PMD being used: compress_qat 00:24:47.351 18:27:55 
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:47.351 18:27:55 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:24:47.351 18:27:55 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:47.351 18:27:55 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:24:47.351 18:27:55 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:47.351 18:27:55 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:47.351 18:27:55 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:47.351 18:27:55 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:47.351 [ 00:24:47.351 { 00:24:47.351 "name": "Nvme0n1", 00:24:47.351 "aliases": [ 00:24:47.351 "2ac30cd0-d5d2-44e0-aa69-26d07cced238" 00:24:47.351 ], 00:24:47.351 "product_name": "NVMe disk", 00:24:47.351 "block_size": 512, 00:24:47.351 "num_blocks": 3907029168, 00:24:47.351 "uuid": "2ac30cd0-d5d2-44e0-aa69-26d07cced238", 00:24:47.351 "assigned_rate_limits": { 00:24:47.351 "rw_ios_per_sec": 0, 00:24:47.351 "rw_mbytes_per_sec": 0, 00:24:47.351 "r_mbytes_per_sec": 0, 00:24:47.351 "w_mbytes_per_sec": 0 00:24:47.351 }, 00:24:47.351 "claimed": false, 00:24:47.351 "zoned": false, 00:24:47.351 "supported_io_types": { 00:24:47.351 "read": true, 00:24:47.351 "write": true, 00:24:47.351 "unmap": true, 00:24:47.351 "flush": true, 00:24:47.351 "reset": true, 00:24:47.351 "nvme_admin": true, 00:24:47.351 "nvme_io": true, 00:24:47.351 "nvme_io_md": false, 00:24:47.351 "write_zeroes": true, 00:24:47.351 "zcopy": false, 00:24:47.351 "get_zone_info": false, 00:24:47.351 "zone_management": false, 00:24:47.351 "zone_append": false, 00:24:47.351 "compare": false, 00:24:47.351 "compare_and_write": false, 00:24:47.351 
"abort": true, 00:24:47.351 "seek_hole": false, 00:24:47.351 "seek_data": false, 00:24:47.351 "copy": false, 00:24:47.351 "nvme_iov_md": false 00:24:47.351 }, 00:24:47.351 "driver_specific": { 00:24:47.351 "nvme": [ 00:24:47.351 { 00:24:47.351 "pci_address": "0000:d8:00.0", 00:24:47.351 "trid": { 00:24:47.351 "trtype": "PCIe", 00:24:47.351 "traddr": "0000:d8:00.0" 00:24:47.351 }, 00:24:47.351 "ctrlr_data": { 00:24:47.351 "cntlid": 0, 00:24:47.351 "vendor_id": "0x8086", 00:24:47.351 "model_number": "INTEL SSDPE2KX020T8", 00:24:47.351 "serial_number": "BTLJ125504VE2P0BGN", 00:24:47.351 "firmware_revision": "VDV10170", 00:24:47.351 "oacs": { 00:24:47.351 "security": 0, 00:24:47.351 "format": 1, 00:24:47.351 "firmware": 1, 00:24:47.351 "ns_manage": 1 00:24:47.351 }, 00:24:47.351 "multi_ctrlr": false, 00:24:47.351 "ana_reporting": false 00:24:47.351 }, 00:24:47.351 "vs": { 00:24:47.351 "nvme_version": "1.2" 00:24:47.351 }, 00:24:47.351 "ns_data": { 00:24:47.351 "id": 1, 00:24:47.351 "can_share": false 00:24:47.351 } 00:24:47.351 } 00:24:47.351 ], 00:24:47.351 "mp_policy": "active_passive" 00:24:47.351 } 00:24:47.351 } 00:24:47.351 ] 00:24:47.351 18:27:55 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:24:47.352 18:27:55 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:47.611 [2024-07-24 18:27:56.026022] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x202f950 PMD being used: compress_qat 00:24:48.548 b79afd86-839b-443d-a238-0bc97342adb8 00:24:48.548 18:27:57 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:48.872 be3f58d2-584d-4f3f-ab1f-48f304003df8 00:24:48.873 18:27:57 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:48.873 18:27:57 compress_compdev -- common/autotest_common.sh@899 -- 
# local bdev_name=lvs0/lv0 00:24:48.873 18:27:57 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:48.873 18:27:57 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:24:48.873 18:27:57 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:48.873 18:27:57 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:48.873 18:27:57 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:49.132 18:27:57 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:49.132 [ 00:24:49.132 { 00:24:49.132 "name": "be3f58d2-584d-4f3f-ab1f-48f304003df8", 00:24:49.132 "aliases": [ 00:24:49.132 "lvs0/lv0" 00:24:49.132 ], 00:24:49.132 "product_name": "Logical Volume", 00:24:49.132 "block_size": 512, 00:24:49.132 "num_blocks": 204800, 00:24:49.132 "uuid": "be3f58d2-584d-4f3f-ab1f-48f304003df8", 00:24:49.132 "assigned_rate_limits": { 00:24:49.132 "rw_ios_per_sec": 0, 00:24:49.132 "rw_mbytes_per_sec": 0, 00:24:49.132 "r_mbytes_per_sec": 0, 00:24:49.132 "w_mbytes_per_sec": 0 00:24:49.132 }, 00:24:49.132 "claimed": false, 00:24:49.132 "zoned": false, 00:24:49.132 "supported_io_types": { 00:24:49.132 "read": true, 00:24:49.132 "write": true, 00:24:49.132 "unmap": true, 00:24:49.132 "flush": false, 00:24:49.132 "reset": true, 00:24:49.132 "nvme_admin": false, 00:24:49.132 "nvme_io": false, 00:24:49.132 "nvme_io_md": false, 00:24:49.132 "write_zeroes": true, 00:24:49.132 "zcopy": false, 00:24:49.132 "get_zone_info": false, 00:24:49.132 "zone_management": false, 00:24:49.132 "zone_append": false, 00:24:49.132 "compare": false, 00:24:49.132 "compare_and_write": false, 00:24:49.132 "abort": false, 00:24:49.132 "seek_hole": true, 00:24:49.132 "seek_data": true, 00:24:49.132 "copy": false, 00:24:49.132 "nvme_iov_md": false 
00:24:49.132 }, 00:24:49.132 "driver_specific": { 00:24:49.132 "lvol": { 00:24:49.132 "lvol_store_uuid": "b79afd86-839b-443d-a238-0bc97342adb8", 00:24:49.132 "base_bdev": "Nvme0n1", 00:24:49.132 "thin_provision": true, 00:24:49.132 "num_allocated_clusters": 0, 00:24:49.132 "snapshot": false, 00:24:49.132 "clone": false, 00:24:49.132 "esnap_clone": false 00:24:49.132 } 00:24:49.132 } 00:24:49.132 } 00:24:49.132 ] 00:24:49.132 18:27:57 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:24:49.132 18:27:57 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:24:49.132 18:27:57 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:24:49.391 [2024-07-24 18:27:57.765678] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:49.391 COMP_lvs0/lv0 00:24:49.391 18:27:57 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:49.391 18:27:57 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:24:49.391 18:27:57 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:49.391 18:27:57 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:24:49.391 18:27:57 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:49.391 18:27:57 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:49.391 18:27:57 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:49.391 18:27:57 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:49.651 [ 00:24:49.651 { 00:24:49.651 "name": "COMP_lvs0/lv0", 00:24:49.651 "aliases": [ 00:24:49.651 
"4812f7e6-4442-50dd-9451-34b1b51f14ee" 00:24:49.651 ], 00:24:49.651 "product_name": "compress", 00:24:49.651 "block_size": 4096, 00:24:49.651 "num_blocks": 25088, 00:24:49.651 "uuid": "4812f7e6-4442-50dd-9451-34b1b51f14ee", 00:24:49.651 "assigned_rate_limits": { 00:24:49.651 "rw_ios_per_sec": 0, 00:24:49.651 "rw_mbytes_per_sec": 0, 00:24:49.651 "r_mbytes_per_sec": 0, 00:24:49.651 "w_mbytes_per_sec": 0 00:24:49.651 }, 00:24:49.651 "claimed": false, 00:24:49.651 "zoned": false, 00:24:49.651 "supported_io_types": { 00:24:49.651 "read": true, 00:24:49.651 "write": true, 00:24:49.651 "unmap": false, 00:24:49.651 "flush": false, 00:24:49.651 "reset": false, 00:24:49.651 "nvme_admin": false, 00:24:49.651 "nvme_io": false, 00:24:49.651 "nvme_io_md": false, 00:24:49.651 "write_zeroes": true, 00:24:49.651 "zcopy": false, 00:24:49.651 "get_zone_info": false, 00:24:49.651 "zone_management": false, 00:24:49.651 "zone_append": false, 00:24:49.651 "compare": false, 00:24:49.651 "compare_and_write": false, 00:24:49.651 "abort": false, 00:24:49.651 "seek_hole": false, 00:24:49.651 "seek_data": false, 00:24:49.651 "copy": false, 00:24:49.651 "nvme_iov_md": false 00:24:49.651 }, 00:24:49.651 "driver_specific": { 00:24:49.651 "compress": { 00:24:49.651 "name": "COMP_lvs0/lv0", 00:24:49.651 "base_bdev_name": "be3f58d2-584d-4f3f-ab1f-48f304003df8", 00:24:49.651 "pm_path": "/tmp/pmem/a18e64ba-6811-4081-a175-ca6fa85e5567" 00:24:49.651 } 00:24:49.651 } 00:24:49.651 } 00:24:49.651 ] 00:24:49.651 18:27:58 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:24:49.651 18:27:58 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:49.651 [2024-07-24 18:27:58.199511] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f8a681b15c0 PMD being used: compress_qat 00:24:49.651 [2024-07-24 18:27:58.201074] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x21f4e50 PMD being 
used: compress_qat 00:24:49.651 Running I/O for 3 seconds... 00:24:52.942 00:24:52.942 Latency(us) 00:24:52.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.942 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:52.942 Verification LBA range: start 0x0 length 0x3100 00:24:52.942 COMP_lvs0/lv0 : 3.01 4069.86 15.90 0.00 0.00 7821.00 173.67 13002.34 00:24:52.942 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:52.942 Verification LBA range: start 0x3100 length 0x3100 00:24:52.942 COMP_lvs0/lv0 : 3.01 4153.62 16.23 0.00 0.00 7663.89 165.48 13002.34 00:24:52.942 =================================================================================================================== 00:24:52.942 Total : 8223.48 32.12 0.00 0.00 7741.64 165.48 13002.34 00:24:52.942 0 00:24:52.942 18:28:01 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:52.942 18:28:01 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:52.942 18:28:01 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:53.201 18:28:01 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:53.201 18:28:01 compress_compdev -- compress/compress.sh@78 -- # killprocess 2326405 00:24:53.201 18:28:01 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 2326405 ']' 00:24:53.201 18:28:01 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 2326405 00:24:53.201 18:28:01 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:24:53.201 18:28:01 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:53.201 18:28:01 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2326405 00:24:53.201 18:28:01 compress_compdev 
-- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:53.201 18:28:01 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:53.201 18:28:01 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2326405' 00:24:53.201 killing process with pid 2326405 00:24:53.201 18:28:01 compress_compdev -- common/autotest_common.sh@969 -- # kill 2326405 00:24:53.201 Received shutdown signal, test time was about 3.000000 seconds 00:24:53.201 00:24:53.201 Latency(us) 00:24:53.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.201 =================================================================================================================== 00:24:53.201 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:53.201 18:28:01 compress_compdev -- common/autotest_common.sh@974 -- # wait 2326405 00:24:55.736 18:28:04 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:24:55.736 18:28:04 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:55.736 18:28:04 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=2328487 00:24:55.736 18:28:04 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:55.736 18:28:04 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:24:55.736 18:28:04 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 2328487 00:24:55.736 18:28:04 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 2328487 ']' 00:24:55.736 18:28:04 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.736 18:28:04 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:55.736 18:28:04 compress_compdev -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.736 18:28:04 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:55.736 18:28:04 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:55.736 [2024-07-24 18:28:04.161300] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:24:55.736 [2024-07-24 18:28:04.161349] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2328487 ] 00:24:55.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:55.736 EAL: Requested device 0000:b3:01.0 cannot be used 00:24:55.736 [identical "Reached maximum number of QAT devices" / "cannot be used" message pair repeated for devices 0000:b3:01.1 through 0000:b3:02.7 and 0000:b5:01.0 through 0000:b5:02.7] 00:24:55.736 [2024-07-24 18:28:04.253683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:55.736 [2024-07-24 18:28:04.329112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.736 [2024-07-24 18:28:04.329206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.736 [2024-07-24 18:28:04.329209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.303 [2024-07-24 18:28:04.846657] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:56.562 18:28:04 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:56.562 
18:28:04 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:24:56.562 18:28:04 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:24:56.562 18:28:04 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:56.562 18:28:04 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:59.847 [2024-07-24 18:28:07.998090] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2218440 PMD being used: compress_qat 00:24:59.847 18:28:08 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:59.847 18:28:08 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:24:59.847 18:28:08 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:59.847 18:28:08 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:24:59.847 18:28:08 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:59.847 18:28:08 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:59.847 18:28:08 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:59.847 18:28:08 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:59.847 [ 00:24:59.847 { 00:24:59.847 "name": "Nvme0n1", 00:24:59.847 "aliases": [ 00:24:59.847 "82d155e3-1c40-43b9-a7dc-6f5ec1524d68" 00:24:59.847 ], 00:24:59.847 "product_name": "NVMe disk", 00:24:59.847 "block_size": 512, 00:24:59.847 "num_blocks": 3907029168, 00:24:59.847 "uuid": "82d155e3-1c40-43b9-a7dc-6f5ec1524d68", 00:24:59.847 "assigned_rate_limits": { 00:24:59.847 "rw_ios_per_sec": 0, 00:24:59.847 "rw_mbytes_per_sec": 0, 00:24:59.847 "r_mbytes_per_sec": 0, 00:24:59.847 "w_mbytes_per_sec": 0 
00:24:59.847 }, 00:24:59.847 "claimed": false, 00:24:59.847 "zoned": false, 00:24:59.847 "supported_io_types": { 00:24:59.847 "read": true, 00:24:59.847 "write": true, 00:24:59.847 "unmap": true, 00:24:59.847 "flush": true, 00:24:59.847 "reset": true, 00:24:59.847 "nvme_admin": true, 00:24:59.847 "nvme_io": true, 00:24:59.847 "nvme_io_md": false, 00:24:59.847 "write_zeroes": true, 00:24:59.847 "zcopy": false, 00:24:59.847 "get_zone_info": false, 00:24:59.847 "zone_management": false, 00:24:59.848 "zone_append": false, 00:24:59.848 "compare": false, 00:24:59.848 "compare_and_write": false, 00:24:59.848 "abort": true, 00:24:59.848 "seek_hole": false, 00:24:59.848 "seek_data": false, 00:24:59.848 "copy": false, 00:24:59.848 "nvme_iov_md": false 00:24:59.848 }, 00:24:59.848 "driver_specific": { 00:24:59.848 "nvme": [ 00:24:59.848 { 00:24:59.848 "pci_address": "0000:d8:00.0", 00:24:59.848 "trid": { 00:24:59.848 "trtype": "PCIe", 00:24:59.848 "traddr": "0000:d8:00.0" 00:24:59.848 }, 00:24:59.848 "ctrlr_data": { 00:24:59.848 "cntlid": 0, 00:24:59.848 "vendor_id": "0x8086", 00:24:59.848 "model_number": "INTEL SSDPE2KX020T8", 00:24:59.848 "serial_number": "BTLJ125504VE2P0BGN", 00:24:59.848 "firmware_revision": "VDV10170", 00:24:59.848 "oacs": { 00:24:59.848 "security": 0, 00:24:59.848 "format": 1, 00:24:59.848 "firmware": 1, 00:24:59.848 "ns_manage": 1 00:24:59.848 }, 00:24:59.848 "multi_ctrlr": false, 00:24:59.848 "ana_reporting": false 00:24:59.848 }, 00:24:59.848 "vs": { 00:24:59.848 "nvme_version": "1.2" 00:24:59.848 }, 00:24:59.848 "ns_data": { 00:24:59.848 "id": 1, 00:24:59.848 "can_share": false 00:24:59.848 } 00:24:59.848 } 00:24:59.848 ], 00:24:59.848 "mp_policy": "active_passive" 00:24:59.848 } 00:24:59.848 } 00:24:59.848 ] 00:24:59.848 18:28:08 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:24:59.848 18:28:08 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--clear-method none Nvme0n1 lvs0 00:25:00.107 [2024-07-24 18:28:08.554899] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x221a970 PMD being used: compress_qat 00:25:01.045 bfa53496-e251-44a6-a5b4-984ab181fc28 00:25:01.045 18:28:09 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:01.304 12254655-1952-4040-bcb0-32694dc054ca 00:25:01.304 18:28:09 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:01.304 18:28:09 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:25:01.304 18:28:09 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:01.304 18:28:09 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:25:01.304 18:28:09 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:01.304 18:28:09 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:01.304 18:28:09 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:01.304 18:28:09 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:01.564 [ 00:25:01.564 { 00:25:01.564 "name": "12254655-1952-4040-bcb0-32694dc054ca", 00:25:01.564 "aliases": [ 00:25:01.564 "lvs0/lv0" 00:25:01.564 ], 00:25:01.564 "product_name": "Logical Volume", 00:25:01.564 "block_size": 512, 00:25:01.564 "num_blocks": 204800, 00:25:01.564 "uuid": "12254655-1952-4040-bcb0-32694dc054ca", 00:25:01.564 "assigned_rate_limits": { 00:25:01.564 "rw_ios_per_sec": 0, 00:25:01.564 "rw_mbytes_per_sec": 0, 00:25:01.564 "r_mbytes_per_sec": 0, 00:25:01.564 "w_mbytes_per_sec": 0 00:25:01.564 }, 00:25:01.564 "claimed": false, 00:25:01.564 "zoned": false, 00:25:01.564 "supported_io_types": { 00:25:01.564 "read": true, 
00:25:01.564 "write": true, 00:25:01.564 "unmap": true, 00:25:01.564 "flush": false, 00:25:01.564 "reset": true, 00:25:01.564 "nvme_admin": false, 00:25:01.564 "nvme_io": false, 00:25:01.564 "nvme_io_md": false, 00:25:01.564 "write_zeroes": true, 00:25:01.564 "zcopy": false, 00:25:01.564 "get_zone_info": false, 00:25:01.564 "zone_management": false, 00:25:01.564 "zone_append": false, 00:25:01.564 "compare": false, 00:25:01.564 "compare_and_write": false, 00:25:01.564 "abort": false, 00:25:01.564 "seek_hole": true, 00:25:01.564 "seek_data": true, 00:25:01.564 "copy": false, 00:25:01.564 "nvme_iov_md": false 00:25:01.564 }, 00:25:01.564 "driver_specific": { 00:25:01.564 "lvol": { 00:25:01.564 "lvol_store_uuid": "bfa53496-e251-44a6-a5b4-984ab181fc28", 00:25:01.564 "base_bdev": "Nvme0n1", 00:25:01.564 "thin_provision": true, 00:25:01.564 "num_allocated_clusters": 0, 00:25:01.564 "snapshot": false, 00:25:01.564 "clone": false, 00:25:01.564 "esnap_clone": false 00:25:01.564 } 00:25:01.564 } 00:25:01.564 } 00:25:01.564 ] 00:25:01.564 18:28:10 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:25:01.564 18:28:10 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:25:01.564 18:28:10 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:25:01.823 [2024-07-24 18:28:10.223255] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:01.824 COMP_lvs0/lv0 00:25:01.824 18:28:10 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:01.824 18:28:10 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:25:01.824 18:28:10 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:01.824 18:28:10 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:25:01.824 18:28:10 compress_compdev -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:01.824 18:28:10 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:01.824 18:28:10 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:01.824 18:28:10 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:02.083 [ 00:25:02.083 { 00:25:02.083 "name": "COMP_lvs0/lv0", 00:25:02.083 "aliases": [ 00:25:02.083 "0843b990-aae5-5087-8916-3759ea35029d" 00:25:02.083 ], 00:25:02.083 "product_name": "compress", 00:25:02.083 "block_size": 512, 00:25:02.083 "num_blocks": 200704, 00:25:02.083 "uuid": "0843b990-aae5-5087-8916-3759ea35029d", 00:25:02.083 "assigned_rate_limits": { 00:25:02.083 "rw_ios_per_sec": 0, 00:25:02.083 "rw_mbytes_per_sec": 0, 00:25:02.083 "r_mbytes_per_sec": 0, 00:25:02.083 "w_mbytes_per_sec": 0 00:25:02.083 }, 00:25:02.083 "claimed": false, 00:25:02.083 "zoned": false, 00:25:02.083 "supported_io_types": { 00:25:02.083 "read": true, 00:25:02.083 "write": true, 00:25:02.083 "unmap": false, 00:25:02.083 "flush": false, 00:25:02.083 "reset": false, 00:25:02.083 "nvme_admin": false, 00:25:02.083 "nvme_io": false, 00:25:02.083 "nvme_io_md": false, 00:25:02.083 "write_zeroes": true, 00:25:02.083 "zcopy": false, 00:25:02.083 "get_zone_info": false, 00:25:02.083 "zone_management": false, 00:25:02.083 "zone_append": false, 00:25:02.083 "compare": false, 00:25:02.083 "compare_and_write": false, 00:25:02.083 "abort": false, 00:25:02.083 "seek_hole": false, 00:25:02.083 "seek_data": false, 00:25:02.083 "copy": false, 00:25:02.083 "nvme_iov_md": false 00:25:02.083 }, 00:25:02.083 "driver_specific": { 00:25:02.083 "compress": { 00:25:02.083 "name": "COMP_lvs0/lv0", 00:25:02.083 "base_bdev_name": "12254655-1952-4040-bcb0-32694dc054ca", 00:25:02.083 "pm_path": 
"/tmp/pmem/de43f4a5-3909-476d-a256-497c6a84414b" 00:25:02.083 } 00:25:02.083 } 00:25:02.083 } 00:25:02.083 ] 00:25:02.083 18:28:10 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:25:02.083 18:28:10 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:02.083 [2024-07-24 18:28:10.644105] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f77a41b1350 PMD being used: compress_qat 00:25:02.083 I/O targets: 00:25:02.083 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:25:02.083 00:25:02.083 00:25:02.083 CUnit - A unit testing framework for C - Version 2.1-3 00:25:02.083 http://cunit.sourceforge.net/ 00:25:02.083 00:25:02.083 00:25:02.083 Suite: bdevio tests on: COMP_lvs0/lv0 00:25:02.083 Test: blockdev write read block ...passed 00:25:02.083 Test: blockdev write zeroes read block ...passed 00:25:02.083 Test: blockdev write zeroes read no split ...passed 00:25:02.343 Test: blockdev write zeroes read split ...passed 00:25:02.343 Test: blockdev write zeroes read split partial ...passed 00:25:02.343 Test: blockdev reset ...[2024-07-24 18:28:10.699013] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:25:02.343 passed 00:25:02.343 Test: blockdev write read 8 blocks ...passed 00:25:02.343 Test: blockdev write read size > 128k ...passed 00:25:02.343 Test: blockdev write read invalid size ...passed 00:25:02.343 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:02.343 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:02.343 Test: blockdev write read max offset ...passed 00:25:02.343 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:02.343 Test: blockdev writev readv 8 blocks ...passed 00:25:02.343 Test: blockdev writev readv 30 x 1block ...passed 00:25:02.343 Test: blockdev writev readv block ...passed 00:25:02.343 Test: blockdev writev 
readv size > 128k ...passed 00:25:02.343 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:02.343 Test: blockdev comparev and writev ...passed 00:25:02.343 Test: blockdev nvme passthru rw ...passed 00:25:02.343 Test: blockdev nvme passthru vendor specific ...passed 00:25:02.343 Test: blockdev nvme admin passthru ...passed 00:25:02.343 Test: blockdev copy ...passed 00:25:02.343 00:25:02.343 Run Summary: Type Total Ran Passed Failed Inactive 00:25:02.343 suites 1 1 n/a 0 0 00:25:02.343 tests 23 23 23 0 0 00:25:02.343 asserts 130 130 130 0 n/a 00:25:02.343 00:25:02.343 Elapsed time = 0.156 seconds 00:25:02.343 0 00:25:02.343 18:28:10 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:25:02.343 18:28:10 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:02.343 18:28:10 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:02.602 18:28:11 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:25:02.602 18:28:11 compress_compdev -- compress/compress.sh@62 -- # killprocess 2328487 00:25:02.602 18:28:11 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 2328487 ']' 00:25:02.603 18:28:11 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 2328487 00:25:02.603 18:28:11 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:25:02.603 18:28:11 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:02.603 18:28:11 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2328487 00:25:02.603 18:28:11 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:02.603 18:28:11 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:02.603 18:28:11 compress_compdev -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 2328487' 00:25:02.603 killing process with pid 2328487 00:25:02.603 18:28:11 compress_compdev -- common/autotest_common.sh@969 -- # kill 2328487 00:25:02.603 18:28:11 compress_compdev -- common/autotest_common.sh@974 -- # wait 2328487 00:25:05.138 18:28:13 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:25:05.138 18:28:13 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:25:05.138 00:25:05.138 real 0m46.926s 00:25:05.138 user 1m44.293s 00:25:05.138 sys 0m4.508s 00:25:05.138 18:28:13 compress_compdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:05.138 18:28:13 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:25:05.138 ************************************ 00:25:05.139 END TEST compress_compdev 00:25:05.139 ************************************ 00:25:05.139 18:28:13 -- spdk/autotest.sh@353 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:25:05.139 18:28:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:05.139 18:28:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:05.139 18:28:13 -- common/autotest_common.sh@10 -- # set +x 00:25:05.139 ************************************ 00:25:05.139 START TEST compress_isal 00:25:05.139 ************************************ 00:25:05.139 18:28:13 compress_isal -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:25:05.398 * Looking for test storage... 
00:25:05.398 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:25:05.398 18:28:13 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.398 18:28:13 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:25:05.398 18:28:13 compress_isal -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.398 18:28:13 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.399 18:28:13 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.399 18:28:13 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.399 18:28:13 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.399 18:28:13 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.399 18:28:13 compress_isal -- paths/export.sh@5 -- # export PATH 00:25:05.399 18:28:13 compress_isal -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.399 18:28:13 compress_isal -- nvmf/common.sh@47 -- # : 0 00:25:05.399 18:28:13 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.399 18:28:13 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.399 18:28:13 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.399 18:28:13 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.399 18:28:13 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.399 18:28:13 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.399 18:28:13 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.399 18:28:13 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.399 18:28:13 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:05.399 18:28:13 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:25:05.399 18:28:13 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:25:05.399 18:28:13 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:25:05.399 18:28:13 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:05.399 18:28:13 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2330235 00:25:05.399 18:28:13 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:05.399 18:28:13 compress_isal -- 
compress/compress.sh@73 -- # waitforlisten 2330235 00:25:05.399 18:28:13 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 2330235 ']' 00:25:05.399 18:28:13 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:25:05.399 18:28:13 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.399 18:28:13 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.399 18:28:13 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.399 18:28:13 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.399 18:28:13 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:05.399 [2024-07-24 18:28:13.880690] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:25:05.399 [2024-07-24 18:28:13.880739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330235 ] 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:01.0 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:01.1 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:01.2 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:01.3 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:01.4 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:01.5 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:01.6 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:01.7 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:02.0 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:02.1 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:02.2 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:02.3 cannot be used 
00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:02.4 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:02.5 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:02.6 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b3:02.7 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:01.0 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:01.1 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:01.2 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:01.3 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:01.4 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:01.5 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:01.6 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:01.7 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:02.0 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:02.1 cannot be used 00:25:05.399 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:02.2 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:02.3 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:02.4 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:02.5 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:02.6 cannot be used 00:25:05.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:05.399 EAL: Requested device 0000:b5:02.7 cannot be used 00:25:05.399 [2024-07-24 18:28:13.972992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:05.659 [2024-07-24 18:28:14.048007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.659 [2024-07-24 18:28:14.048011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.227 18:28:14 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.227 18:28:14 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:25:06.227 18:28:14 compress_isal -- compress/compress.sh@74 -- # create_vols 00:25:06.227 18:28:14 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:06.227 18:28:14 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:09.518 18:28:17 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:09.518 18:28:17 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:25:09.518 18:28:17 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:09.518 18:28:17 
compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:09.518 18:28:17 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:09.518 18:28:17 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:09.518 18:28:17 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:09.518 18:28:17 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:09.518 [ 00:25:09.518 { 00:25:09.518 "name": "Nvme0n1", 00:25:09.518 "aliases": [ 00:25:09.518 "33d6bdf4-b026-4301-a81e-8443109bc406" 00:25:09.518 ], 00:25:09.518 "product_name": "NVMe disk", 00:25:09.518 "block_size": 512, 00:25:09.518 "num_blocks": 3907029168, 00:25:09.518 "uuid": "33d6bdf4-b026-4301-a81e-8443109bc406", 00:25:09.519 "assigned_rate_limits": { 00:25:09.519 "rw_ios_per_sec": 0, 00:25:09.519 "rw_mbytes_per_sec": 0, 00:25:09.519 "r_mbytes_per_sec": 0, 00:25:09.519 "w_mbytes_per_sec": 0 00:25:09.519 }, 00:25:09.519 "claimed": false, 00:25:09.519 "zoned": false, 00:25:09.519 "supported_io_types": { 00:25:09.519 "read": true, 00:25:09.519 "write": true, 00:25:09.519 "unmap": true, 00:25:09.519 "flush": true, 00:25:09.519 "reset": true, 00:25:09.519 "nvme_admin": true, 00:25:09.519 "nvme_io": true, 00:25:09.519 "nvme_io_md": false, 00:25:09.519 "write_zeroes": true, 00:25:09.519 "zcopy": false, 00:25:09.519 "get_zone_info": false, 00:25:09.519 "zone_management": false, 00:25:09.519 "zone_append": false, 00:25:09.519 "compare": false, 00:25:09.519 "compare_and_write": false, 00:25:09.519 "abort": true, 00:25:09.519 "seek_hole": false, 00:25:09.519 "seek_data": false, 00:25:09.519 "copy": false, 00:25:09.519 "nvme_iov_md": false 00:25:09.519 }, 00:25:09.519 "driver_specific": { 00:25:09.519 "nvme": [ 00:25:09.519 { 00:25:09.519 "pci_address": "0000:d8:00.0", 00:25:09.519 "trid": { 00:25:09.519 
"trtype": "PCIe", 00:25:09.519 "traddr": "0000:d8:00.0" 00:25:09.519 }, 00:25:09.519 "ctrlr_data": { 00:25:09.519 "cntlid": 0, 00:25:09.519 "vendor_id": "0x8086", 00:25:09.519 "model_number": "INTEL SSDPE2KX020T8", 00:25:09.519 "serial_number": "BTLJ125504VE2P0BGN", 00:25:09.519 "firmware_revision": "VDV10170", 00:25:09.519 "oacs": { 00:25:09.519 "security": 0, 00:25:09.519 "format": 1, 00:25:09.519 "firmware": 1, 00:25:09.519 "ns_manage": 1 00:25:09.519 }, 00:25:09.519 "multi_ctrlr": false, 00:25:09.519 "ana_reporting": false 00:25:09.519 }, 00:25:09.519 "vs": { 00:25:09.519 "nvme_version": "1.2" 00:25:09.519 }, 00:25:09.519 "ns_data": { 00:25:09.519 "id": 1, 00:25:09.519 "can_share": false 00:25:09.519 } 00:25:09.519 } 00:25:09.519 ], 00:25:09.519 "mp_policy": "active_passive" 00:25:09.519 } 00:25:09.519 } 00:25:09.519 ] 00:25:09.519 18:28:18 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:09.519 18:28:18 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:10.970 47602dbc-519d-4d3d-889e-8666427d3827 00:25:10.970 18:28:19 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:10.970 647f4e59-0e4b-4869-ba8d-2dc7db486487 00:25:10.970 18:28:19 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:10.970 18:28:19 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:25:10.970 18:28:19 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:10.970 18:28:19 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:10.970 18:28:19 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:10.970 18:28:19 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:10.970 18:28:19 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:11.230 18:28:19 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:11.230 [ 00:25:11.230 { 00:25:11.230 "name": "647f4e59-0e4b-4869-ba8d-2dc7db486487", 00:25:11.230 "aliases": [ 00:25:11.230 "lvs0/lv0" 00:25:11.230 ], 00:25:11.230 "product_name": "Logical Volume", 00:25:11.230 "block_size": 512, 00:25:11.230 "num_blocks": 204800, 00:25:11.230 "uuid": "647f4e59-0e4b-4869-ba8d-2dc7db486487", 00:25:11.230 "assigned_rate_limits": { 00:25:11.230 "rw_ios_per_sec": 0, 00:25:11.230 "rw_mbytes_per_sec": 0, 00:25:11.230 "r_mbytes_per_sec": 0, 00:25:11.230 "w_mbytes_per_sec": 0 00:25:11.230 }, 00:25:11.230 "claimed": false, 00:25:11.230 "zoned": false, 00:25:11.230 "supported_io_types": { 00:25:11.230 "read": true, 00:25:11.230 "write": true, 00:25:11.230 "unmap": true, 00:25:11.230 "flush": false, 00:25:11.230 "reset": true, 00:25:11.230 "nvme_admin": false, 00:25:11.230 "nvme_io": false, 00:25:11.230 "nvme_io_md": false, 00:25:11.230 "write_zeroes": true, 00:25:11.230 "zcopy": false, 00:25:11.230 "get_zone_info": false, 00:25:11.230 "zone_management": false, 00:25:11.230 "zone_append": false, 00:25:11.230 "compare": false, 00:25:11.230 "compare_and_write": false, 00:25:11.230 "abort": false, 00:25:11.230 "seek_hole": true, 00:25:11.230 "seek_data": true, 00:25:11.230 "copy": false, 00:25:11.230 "nvme_iov_md": false 00:25:11.230 }, 00:25:11.230 "driver_specific": { 00:25:11.230 "lvol": { 00:25:11.230 "lvol_store_uuid": "47602dbc-519d-4d3d-889e-8666427d3827", 00:25:11.230 "base_bdev": "Nvme0n1", 00:25:11.230 "thin_provision": true, 00:25:11.230 "num_allocated_clusters": 0, 00:25:11.230 "snapshot": false, 00:25:11.230 "clone": false, 00:25:11.230 "esnap_clone": false 00:25:11.230 } 00:25:11.230 } 00:25:11.230 } 00:25:11.230 ] 00:25:11.230 18:28:19 compress_isal -- 
common/autotest_common.sh@907 -- # return 0 00:25:11.230 18:28:19 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:25:11.230 18:28:19 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:25:11.489 [2024-07-24 18:28:19.937150] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:11.489 COMP_lvs0/lv0 00:25:11.489 18:28:19 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:11.489 18:28:19 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:25:11.489 18:28:19 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:11.489 18:28:19 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:11.489 18:28:19 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:11.489 18:28:19 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:11.489 18:28:19 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:11.748 18:28:20 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:11.748 [ 00:25:11.748 { 00:25:11.748 "name": "COMP_lvs0/lv0", 00:25:11.748 "aliases": [ 00:25:11.748 "00f29c13-124f-5ca8-aea2-b832b697598b" 00:25:11.748 ], 00:25:11.748 "product_name": "compress", 00:25:11.748 "block_size": 512, 00:25:11.748 "num_blocks": 200704, 00:25:11.748 "uuid": "00f29c13-124f-5ca8-aea2-b832b697598b", 00:25:11.748 "assigned_rate_limits": { 00:25:11.748 "rw_ios_per_sec": 0, 00:25:11.748 "rw_mbytes_per_sec": 0, 00:25:11.748 "r_mbytes_per_sec": 0, 00:25:11.748 "w_mbytes_per_sec": 0 00:25:11.748 }, 00:25:11.748 "claimed": false, 00:25:11.748 "zoned": false, 00:25:11.748 "supported_io_types": { 
00:25:11.748 "read": true, 00:25:11.749 "write": true, 00:25:11.749 "unmap": false, 00:25:11.749 "flush": false, 00:25:11.749 "reset": false, 00:25:11.749 "nvme_admin": false, 00:25:11.749 "nvme_io": false, 00:25:11.749 "nvme_io_md": false, 00:25:11.749 "write_zeroes": true, 00:25:11.749 "zcopy": false, 00:25:11.749 "get_zone_info": false, 00:25:11.749 "zone_management": false, 00:25:11.749 "zone_append": false, 00:25:11.749 "compare": false, 00:25:11.749 "compare_and_write": false, 00:25:11.749 "abort": false, 00:25:11.749 "seek_hole": false, 00:25:11.749 "seek_data": false, 00:25:11.749 "copy": false, 00:25:11.749 "nvme_iov_md": false 00:25:11.749 }, 00:25:11.749 "driver_specific": { 00:25:11.749 "compress": { 00:25:11.749 "name": "COMP_lvs0/lv0", 00:25:11.749 "base_bdev_name": "647f4e59-0e4b-4869-ba8d-2dc7db486487", 00:25:11.749 "pm_path": "/tmp/pmem/eff2990e-7e18-49a6-bb89-fb88c19f94c9" 00:25:11.749 } 00:25:11.749 } 00:25:11.749 } 00:25:11.749 ] 00:25:11.749 18:28:20 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:11.749 18:28:20 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:12.008 Running I/O for 3 seconds... 
00:25:15.301 00:25:15.301 Latency(us) 00:25:15.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.301 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:25:15.301 Verification LBA range: start 0x0 length 0x3100 00:25:15.301 COMP_lvs0/lv0 : 3.01 3578.02 13.98 0.00 0.00 8900.98 56.12 14260.63 00:25:15.301 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:25:15.301 Verification LBA range: start 0x3100 length 0x3100 00:25:15.301 COMP_lvs0/lv0 : 3.01 3592.38 14.03 0.00 0.00 8869.18 54.89 14155.78 00:25:15.301 =================================================================================================================== 00:25:15.301 Total : 7170.40 28.01 0.00 0.00 8885.05 54.89 14260.63 00:25:15.301 0 00:25:15.301 18:28:23 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:25:15.301 18:28:23 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:15.301 18:28:23 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:15.301 18:28:23 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:15.301 18:28:23 compress_isal -- compress/compress.sh@78 -- # killprocess 2330235 00:25:15.301 18:28:23 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 2330235 ']' 00:25:15.301 18:28:23 compress_isal -- common/autotest_common.sh@954 -- # kill -0 2330235 00:25:15.301 18:28:23 compress_isal -- common/autotest_common.sh@955 -- # uname 00:25:15.301 18:28:23 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.301 18:28:23 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2330235 00:25:15.301 18:28:23 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:15.301 18:28:23 compress_isal 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:15.301 18:28:23 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2330235' 00:25:15.301 killing process with pid 2330235 00:25:15.301 18:28:23 compress_isal -- common/autotest_common.sh@969 -- # kill 2330235 00:25:15.301 Received shutdown signal, test time was about 3.000000 seconds 00:25:15.301 00:25:15.301 Latency(us) 00:25:15.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.301 =================================================================================================================== 00:25:15.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.301 18:28:23 compress_isal -- common/autotest_common.sh@974 -- # wait 2330235 00:25:17.837 18:28:26 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:25:17.837 18:28:26 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:17.837 18:28:26 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2332249 00:25:17.837 18:28:26 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:17.837 18:28:26 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:25:17.837 18:28:26 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2332249 00:25:17.837 18:28:26 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 2332249 ']' 00:25:17.837 18:28:26 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.837 18:28:26 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:17.837 18:28:26 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:17.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.837 18:28:26 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:17.837 18:28:26 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:17.837 [2024-07-24 18:28:26.313608] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:25:17.837 [2024-07-24 18:28:26.313667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332249 ] 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:01.0 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:01.1 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:01.2 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:01.3 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:01.4 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:01.5 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:01.6 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:01.7 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:02.0 cannot be used 00:25:17.837 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:02.1 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:02.2 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:02.3 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:02.4 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:02.5 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:02.6 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b3:02.7 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:01.0 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:01.1 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:01.2 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:01.3 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:01.4 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:01.5 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:01.6 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:25:17.837 EAL: Requested device 0000:b5:01.7 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:02.0 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:02.1 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:02.2 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:02.3 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:02.4 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:02.5 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:02.6 cannot be used 00:25:17.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:17.837 EAL: Requested device 0000:b5:02.7 cannot be used 00:25:17.837 [2024-07-24 18:28:26.406069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:18.096 [2024-07-24 18:28:26.481073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.096 [2024-07-24 18:28:26.481077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.664 18:28:27 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:18.664 18:28:27 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:25:18.664 18:28:27 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:25:18.664 18:28:27 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:18.664 18:28:27 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:21.953 18:28:30 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:21.953 18:28:30 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:25:21.953 18:28:30 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:21.953 18:28:30 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:21.953 18:28:30 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:21.953 18:28:30 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:21.953 18:28:30 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:21.953 18:28:30 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:21.953 [ 00:25:21.953 { 00:25:21.953 "name": "Nvme0n1", 00:25:21.953 "aliases": [ 00:25:21.953 "300ab835-9340-4fa1-bf69-a364377d703d" 00:25:21.953 ], 00:25:21.953 "product_name": "NVMe disk", 00:25:21.953 "block_size": 512, 00:25:21.953 "num_blocks": 3907029168, 00:25:21.953 "uuid": "300ab835-9340-4fa1-bf69-a364377d703d", 00:25:21.953 "assigned_rate_limits": { 00:25:21.953 "rw_ios_per_sec": 0, 00:25:21.953 "rw_mbytes_per_sec": 0, 00:25:21.953 "r_mbytes_per_sec": 0, 00:25:21.953 "w_mbytes_per_sec": 0 00:25:21.953 }, 00:25:21.953 "claimed": false, 00:25:21.953 "zoned": false, 00:25:21.953 "supported_io_types": { 00:25:21.953 "read": true, 00:25:21.953 "write": true, 00:25:21.953 "unmap": true, 00:25:21.953 "flush": true, 00:25:21.953 "reset": true, 00:25:21.953 "nvme_admin": true, 00:25:21.953 "nvme_io": true, 00:25:21.953 "nvme_io_md": false, 00:25:21.953 "write_zeroes": true, 00:25:21.953 "zcopy": false, 00:25:21.953 "get_zone_info": false, 00:25:21.953 "zone_management": false, 00:25:21.953 "zone_append": false, 
00:25:21.953 "compare": false, 00:25:21.953 "compare_and_write": false, 00:25:21.953 "abort": true, 00:25:21.953 "seek_hole": false, 00:25:21.953 "seek_data": false, 00:25:21.953 "copy": false, 00:25:21.953 "nvme_iov_md": false 00:25:21.953 }, 00:25:21.953 "driver_specific": { 00:25:21.953 "nvme": [ 00:25:21.953 { 00:25:21.953 "pci_address": "0000:d8:00.0", 00:25:21.953 "trid": { 00:25:21.953 "trtype": "PCIe", 00:25:21.953 "traddr": "0000:d8:00.0" 00:25:21.953 }, 00:25:21.953 "ctrlr_data": { 00:25:21.953 "cntlid": 0, 00:25:21.953 "vendor_id": "0x8086", 00:25:21.953 "model_number": "INTEL SSDPE2KX020T8", 00:25:21.953 "serial_number": "BTLJ125504VE2P0BGN", 00:25:21.953 "firmware_revision": "VDV10170", 00:25:21.953 "oacs": { 00:25:21.953 "security": 0, 00:25:21.953 "format": 1, 00:25:21.953 "firmware": 1, 00:25:21.953 "ns_manage": 1 00:25:21.953 }, 00:25:21.953 "multi_ctrlr": false, 00:25:21.953 "ana_reporting": false 00:25:21.953 }, 00:25:21.953 "vs": { 00:25:21.953 "nvme_version": "1.2" 00:25:21.953 }, 00:25:21.953 "ns_data": { 00:25:21.953 "id": 1, 00:25:21.953 "can_share": false 00:25:21.953 } 00:25:21.953 } 00:25:21.953 ], 00:25:21.953 "mp_policy": "active_passive" 00:25:21.953 } 00:25:21.953 } 00:25:21.953 ] 00:25:21.953 18:28:30 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:21.953 18:28:30 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:23.328 54af9991-c9c2-41a7-9aa3-76f830491b5f 00:25:23.328 18:28:31 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:23.328 8c20f8d6-9c06-4dce-bc79-29c5122d7175 00:25:23.328 18:28:31 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:23.328 18:28:31 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:25:23.328 18:28:31 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:23.328 18:28:31 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:23.328 18:28:31 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:23.328 18:28:31 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:23.328 18:28:31 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:23.586 18:28:31 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:23.586 [ 00:25:23.586 { 00:25:23.586 "name": "8c20f8d6-9c06-4dce-bc79-29c5122d7175", 00:25:23.586 "aliases": [ 00:25:23.586 "lvs0/lv0" 00:25:23.586 ], 00:25:23.586 "product_name": "Logical Volume", 00:25:23.586 "block_size": 512, 00:25:23.586 "num_blocks": 204800, 00:25:23.586 "uuid": "8c20f8d6-9c06-4dce-bc79-29c5122d7175", 00:25:23.586 "assigned_rate_limits": { 00:25:23.586 "rw_ios_per_sec": 0, 00:25:23.586 "rw_mbytes_per_sec": 0, 00:25:23.586 "r_mbytes_per_sec": 0, 00:25:23.586 "w_mbytes_per_sec": 0 00:25:23.586 }, 00:25:23.586 "claimed": false, 00:25:23.586 "zoned": false, 00:25:23.586 "supported_io_types": { 00:25:23.586 "read": true, 00:25:23.586 "write": true, 00:25:23.586 "unmap": true, 00:25:23.586 "flush": false, 00:25:23.586 "reset": true, 00:25:23.586 "nvme_admin": false, 00:25:23.586 "nvme_io": false, 00:25:23.586 "nvme_io_md": false, 00:25:23.586 "write_zeroes": true, 00:25:23.586 "zcopy": false, 00:25:23.586 "get_zone_info": false, 00:25:23.586 "zone_management": false, 00:25:23.586 "zone_append": false, 00:25:23.586 "compare": false, 00:25:23.586 "compare_and_write": false, 00:25:23.586 "abort": false, 00:25:23.586 "seek_hole": true, 00:25:23.586 "seek_data": true, 00:25:23.586 "copy": false, 00:25:23.586 "nvme_iov_md": false 00:25:23.586 }, 00:25:23.586 "driver_specific": { 00:25:23.586 "lvol": { 00:25:23.586 
"lvol_store_uuid": "54af9991-c9c2-41a7-9aa3-76f830491b5f", 00:25:23.586 "base_bdev": "Nvme0n1", 00:25:23.586 "thin_provision": true, 00:25:23.586 "num_allocated_clusters": 0, 00:25:23.586 "snapshot": false, 00:25:23.586 "clone": false, 00:25:23.586 "esnap_clone": false 00:25:23.586 } 00:25:23.586 } 00:25:23.586 } 00:25:23.586 ] 00:25:23.586 18:28:32 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:23.586 18:28:32 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:25:23.586 18:28:32 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:25:23.843 [2024-07-24 18:28:32.291400] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:23.843 COMP_lvs0/lv0 00:25:23.843 18:28:32 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:23.843 18:28:32 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:25:23.843 18:28:32 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:23.843 18:28:32 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:23.843 18:28:32 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:23.843 18:28:32 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:23.843 18:28:32 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:24.101 18:28:32 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:24.101 [ 00:25:24.101 { 00:25:24.101 "name": "COMP_lvs0/lv0", 00:25:24.101 "aliases": [ 00:25:24.101 "ff2d3aab-28a4-5218-9433-54784ccfc27d" 00:25:24.101 ], 00:25:24.101 "product_name": "compress", 00:25:24.101 "block_size": 512, 00:25:24.101 
"num_blocks": 200704, 00:25:24.101 "uuid": "ff2d3aab-28a4-5218-9433-54784ccfc27d", 00:25:24.101 "assigned_rate_limits": { 00:25:24.101 "rw_ios_per_sec": 0, 00:25:24.101 "rw_mbytes_per_sec": 0, 00:25:24.101 "r_mbytes_per_sec": 0, 00:25:24.101 "w_mbytes_per_sec": 0 00:25:24.101 }, 00:25:24.101 "claimed": false, 00:25:24.101 "zoned": false, 00:25:24.101 "supported_io_types": { 00:25:24.101 "read": true, 00:25:24.101 "write": true, 00:25:24.101 "unmap": false, 00:25:24.101 "flush": false, 00:25:24.101 "reset": false, 00:25:24.101 "nvme_admin": false, 00:25:24.101 "nvme_io": false, 00:25:24.101 "nvme_io_md": false, 00:25:24.101 "write_zeroes": true, 00:25:24.101 "zcopy": false, 00:25:24.101 "get_zone_info": false, 00:25:24.101 "zone_management": false, 00:25:24.101 "zone_append": false, 00:25:24.101 "compare": false, 00:25:24.101 "compare_and_write": false, 00:25:24.101 "abort": false, 00:25:24.101 "seek_hole": false, 00:25:24.101 "seek_data": false, 00:25:24.101 "copy": false, 00:25:24.101 "nvme_iov_md": false 00:25:24.101 }, 00:25:24.101 "driver_specific": { 00:25:24.101 "compress": { 00:25:24.101 "name": "COMP_lvs0/lv0", 00:25:24.101 "base_bdev_name": "8c20f8d6-9c06-4dce-bc79-29c5122d7175", 00:25:24.101 "pm_path": "/tmp/pmem/65327e8f-9750-4605-b8e9-e1dd5627ec82" 00:25:24.101 } 00:25:24.101 } 00:25:24.101 } 00:25:24.101 ] 00:25:24.101 18:28:32 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:24.101 18:28:32 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:24.359 Running I/O for 3 seconds... 
00:25:27.648 00:25:27.648 Latency(us) 00:25:27.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.648 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:25:27.648 Verification LBA range: start 0x0 length 0x3100 00:25:27.648 COMP_lvs0/lv0 : 3.00 3564.66 13.92 0.00 0.00 8939.89 54.48 16252.93 00:25:27.648 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:25:27.648 Verification LBA range: start 0x3100 length 0x3100 00:25:27.648 COMP_lvs0/lv0 : 3.01 3544.33 13.85 0.00 0.00 8988.14 54.89 15938.36 00:25:27.648 =================================================================================================================== 00:25:27.648 Total : 7108.99 27.77 0.00 0.00 8963.96 54.48 16252.93 00:25:27.648 0 00:25:27.648 18:28:35 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:25:27.648 18:28:35 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:27.648 18:28:35 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:27.648 18:28:36 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:27.648 18:28:36 compress_isal -- compress/compress.sh@78 -- # killprocess 2332249 00:25:27.648 18:28:36 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 2332249 ']' 00:25:27.648 18:28:36 compress_isal -- common/autotest_common.sh@954 -- # kill -0 2332249 00:25:27.648 18:28:36 compress_isal -- common/autotest_common.sh@955 -- # uname 00:25:27.648 18:28:36 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:27.648 18:28:36 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2332249 00:25:27.648 18:28:36 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:27.648 18:28:36 compress_isal 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:27.648 18:28:36 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2332249' 00:25:27.648 killing process with pid 2332249 00:25:27.648 18:28:36 compress_isal -- common/autotest_common.sh@969 -- # kill 2332249 00:25:27.648 Received shutdown signal, test time was about 3.000000 seconds 00:25:27.648 00:25:27.648 Latency(us) 00:25:27.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.648 =================================================================================================================== 00:25:27.648 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.648 18:28:36 compress_isal -- common/autotest_common.sh@974 -- # wait 2332249 00:25:30.185 18:28:38 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:25:30.185 18:28:38 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:30.185 18:28:38 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=2334289 00:25:30.185 18:28:38 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:30.185 18:28:38 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:25:30.185 18:28:38 compress_isal -- compress/compress.sh@73 -- # waitforlisten 2334289 00:25:30.185 18:28:38 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 2334289 ']' 00:25:30.185 18:28:38 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.185 18:28:38 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.185 18:28:38 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:30.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.185 18:28:38 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.185 18:28:38 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:30.185 [2024-07-24 18:28:38.664808] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:25:30.185 [2024-07-24 18:28:38.664858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334289 ] 00:25:30.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.185 EAL: Requested device 0000:b3:01.0 cannot be used 00:25:30.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.185 EAL: Requested device 0000:b3:01.1 cannot be used 00:25:30.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:01.2 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:01.3 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:01.4 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:01.5 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:01.6 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:01.7 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:02.0 cannot be used 00:25:30.186 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:02.1 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:02.2 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:02.3 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:02.4 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:02.5 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:02.6 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b3:02.7 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:01.0 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:01.1 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:01.2 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:01.3 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:01.4 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:01.5 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:01.6 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:25:30.186 EAL: Requested device 0000:b5:01.7 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:02.0 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:02.1 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:02.2 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:02.3 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:02.4 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:02.5 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:02.6 cannot be used 00:25:30.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:30.186 EAL: Requested device 0000:b5:02.7 cannot be used 00:25:30.186 [2024-07-24 18:28:38.756536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:30.446 [2024-07-24 18:28:38.831936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.446 [2024-07-24 18:28:38.831939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.014 18:28:39 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:31.014 18:28:39 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:25:31.014 18:28:39 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:25:31.014 18:28:39 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:31.014 18:28:39 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:34.305 18:28:42 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:34.305 18:28:42 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:25:34.305 18:28:42 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:34.305 18:28:42 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:34.305 18:28:42 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:34.305 18:28:42 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:34.305 18:28:42 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:34.305 18:28:42 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:34.305 [ 00:25:34.305 { 00:25:34.305 "name": "Nvme0n1", 00:25:34.305 "aliases": [ 00:25:34.305 "7a1b13f5-38c7-452b-8aaf-9a83a2a98958" 00:25:34.305 ], 00:25:34.305 "product_name": "NVMe disk", 00:25:34.305 "block_size": 512, 00:25:34.305 "num_blocks": 3907029168, 00:25:34.305 "uuid": "7a1b13f5-38c7-452b-8aaf-9a83a2a98958", 00:25:34.305 "assigned_rate_limits": { 00:25:34.305 "rw_ios_per_sec": 0, 00:25:34.305 "rw_mbytes_per_sec": 0, 00:25:34.305 "r_mbytes_per_sec": 0, 00:25:34.305 "w_mbytes_per_sec": 0 00:25:34.305 }, 00:25:34.305 "claimed": false, 00:25:34.305 "zoned": false, 00:25:34.305 "supported_io_types": { 00:25:34.305 "read": true, 00:25:34.305 "write": true, 00:25:34.305 "unmap": true, 00:25:34.305 "flush": true, 00:25:34.305 "reset": true, 00:25:34.305 "nvme_admin": true, 00:25:34.305 "nvme_io": true, 00:25:34.305 "nvme_io_md": false, 00:25:34.305 "write_zeroes": true, 00:25:34.305 "zcopy": false, 00:25:34.305 "get_zone_info": false, 00:25:34.305 "zone_management": false, 00:25:34.305 "zone_append": false, 
00:25:34.305 "compare": false, 00:25:34.305 "compare_and_write": false, 00:25:34.305 "abort": true, 00:25:34.305 "seek_hole": false, 00:25:34.305 "seek_data": false, 00:25:34.305 "copy": false, 00:25:34.305 "nvme_iov_md": false 00:25:34.305 }, 00:25:34.305 "driver_specific": { 00:25:34.305 "nvme": [ 00:25:34.305 { 00:25:34.305 "pci_address": "0000:d8:00.0", 00:25:34.305 "trid": { 00:25:34.305 "trtype": "PCIe", 00:25:34.305 "traddr": "0000:d8:00.0" 00:25:34.305 }, 00:25:34.305 "ctrlr_data": { 00:25:34.305 "cntlid": 0, 00:25:34.305 "vendor_id": "0x8086", 00:25:34.305 "model_number": "INTEL SSDPE2KX020T8", 00:25:34.305 "serial_number": "BTLJ125504VE2P0BGN", 00:25:34.305 "firmware_revision": "VDV10170", 00:25:34.305 "oacs": { 00:25:34.305 "security": 0, 00:25:34.305 "format": 1, 00:25:34.305 "firmware": 1, 00:25:34.305 "ns_manage": 1 00:25:34.305 }, 00:25:34.305 "multi_ctrlr": false, 00:25:34.305 "ana_reporting": false 00:25:34.305 }, 00:25:34.305 "vs": { 00:25:34.305 "nvme_version": "1.2" 00:25:34.305 }, 00:25:34.305 "ns_data": { 00:25:34.305 "id": 1, 00:25:34.305 "can_share": false 00:25:34.305 } 00:25:34.305 } 00:25:34.305 ], 00:25:34.305 "mp_policy": "active_passive" 00:25:34.305 } 00:25:34.305 } 00:25:34.305 ] 00:25:34.305 18:28:42 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:34.305 18:28:42 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:35.711 3b9aa3bc-6064-4364-a9ea-27bf4b7cb2bb 00:25:35.711 18:28:44 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:35.711 72e7686d-75ad-497a-89e9-864231636c90 00:25:35.711 18:28:44 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:35.711 18:28:44 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:25:35.711 18:28:44 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:35.711 18:28:44 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:35.711 18:28:44 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:35.711 18:28:44 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:35.711 18:28:44 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:35.971 18:28:44 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:35.971 [ 00:25:35.971 { 00:25:35.971 "name": "72e7686d-75ad-497a-89e9-864231636c90", 00:25:35.971 "aliases": [ 00:25:35.971 "lvs0/lv0" 00:25:35.971 ], 00:25:35.971 "product_name": "Logical Volume", 00:25:35.971 "block_size": 512, 00:25:35.971 "num_blocks": 204800, 00:25:35.971 "uuid": "72e7686d-75ad-497a-89e9-864231636c90", 00:25:35.971 "assigned_rate_limits": { 00:25:35.971 "rw_ios_per_sec": 0, 00:25:35.971 "rw_mbytes_per_sec": 0, 00:25:35.971 "r_mbytes_per_sec": 0, 00:25:35.971 "w_mbytes_per_sec": 0 00:25:35.971 }, 00:25:35.971 "claimed": false, 00:25:35.971 "zoned": false, 00:25:35.971 "supported_io_types": { 00:25:35.971 "read": true, 00:25:35.971 "write": true, 00:25:35.971 "unmap": true, 00:25:35.971 "flush": false, 00:25:35.971 "reset": true, 00:25:35.971 "nvme_admin": false, 00:25:35.971 "nvme_io": false, 00:25:35.971 "nvme_io_md": false, 00:25:35.971 "write_zeroes": true, 00:25:35.971 "zcopy": false, 00:25:35.971 "get_zone_info": false, 00:25:35.971 "zone_management": false, 00:25:35.971 "zone_append": false, 00:25:35.971 "compare": false, 00:25:35.971 "compare_and_write": false, 00:25:35.971 "abort": false, 00:25:35.971 "seek_hole": true, 00:25:35.971 "seek_data": true, 00:25:35.971 "copy": false, 00:25:35.971 "nvme_iov_md": false 00:25:35.971 }, 00:25:35.971 "driver_specific": { 00:25:35.971 "lvol": { 00:25:35.971 
"lvol_store_uuid": "3b9aa3bc-6064-4364-a9ea-27bf4b7cb2bb", 00:25:35.971 "base_bdev": "Nvme0n1", 00:25:35.971 "thin_provision": true, 00:25:35.971 "num_allocated_clusters": 0, 00:25:35.971 "snapshot": false, 00:25:35.971 "clone": false, 00:25:35.971 "esnap_clone": false 00:25:35.971 } 00:25:35.971 } 00:25:35.971 } 00:25:35.971 ] 00:25:35.971 18:28:44 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:35.971 18:28:44 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:25:35.971 18:28:44 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:25:36.230 [2024-07-24 18:28:44.697818] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:36.230 COMP_lvs0/lv0 00:25:36.230 18:28:44 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:36.230 18:28:44 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:25:36.230 18:28:44 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:36.230 18:28:44 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:36.230 18:28:44 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:36.230 18:28:44 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:36.230 18:28:44 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:36.489 18:28:44 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:36.489 [ 00:25:36.489 { 00:25:36.489 "name": "COMP_lvs0/lv0", 00:25:36.489 "aliases": [ 00:25:36.489 "6b852fb9-da10-5ff2-9032-6a46d297431c" 00:25:36.489 ], 00:25:36.489 "product_name": "compress", 00:25:36.489 "block_size": 4096, 00:25:36.489 
"num_blocks": 25088, 00:25:36.489 "uuid": "6b852fb9-da10-5ff2-9032-6a46d297431c", 00:25:36.489 "assigned_rate_limits": { 00:25:36.489 "rw_ios_per_sec": 0, 00:25:36.489 "rw_mbytes_per_sec": 0, 00:25:36.489 "r_mbytes_per_sec": 0, 00:25:36.489 "w_mbytes_per_sec": 0 00:25:36.489 }, 00:25:36.489 "claimed": false, 00:25:36.489 "zoned": false, 00:25:36.489 "supported_io_types": { 00:25:36.489 "read": true, 00:25:36.489 "write": true, 00:25:36.489 "unmap": false, 00:25:36.489 "flush": false, 00:25:36.489 "reset": false, 00:25:36.489 "nvme_admin": false, 00:25:36.489 "nvme_io": false, 00:25:36.489 "nvme_io_md": false, 00:25:36.489 "write_zeroes": true, 00:25:36.489 "zcopy": false, 00:25:36.489 "get_zone_info": false, 00:25:36.489 "zone_management": false, 00:25:36.489 "zone_append": false, 00:25:36.489 "compare": false, 00:25:36.489 "compare_and_write": false, 00:25:36.489 "abort": false, 00:25:36.489 "seek_hole": false, 00:25:36.489 "seek_data": false, 00:25:36.489 "copy": false, 00:25:36.489 "nvme_iov_md": false 00:25:36.489 }, 00:25:36.489 "driver_specific": { 00:25:36.489 "compress": { 00:25:36.489 "name": "COMP_lvs0/lv0", 00:25:36.489 "base_bdev_name": "72e7686d-75ad-497a-89e9-864231636c90", 00:25:36.489 "pm_path": "/tmp/pmem/753e5de8-1501-4b80-9ff4-806961401c36" 00:25:36.489 } 00:25:36.489 } 00:25:36.489 } 00:25:36.489 ] 00:25:36.489 18:28:45 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:36.489 18:28:45 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:36.749 Running I/O for 3 seconds... 
00:25:40.040 00:25:40.040 Latency(us) 00:25:40.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.040 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:25:40.040 Verification LBA range: start 0x0 length 0x3100 00:25:40.040 COMP_lvs0/lv0 : 3.01 3557.00 13.89 0.00 0.00 8958.97 56.12 15414.07 00:25:40.040 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:25:40.040 Verification LBA range: start 0x3100 length 0x3100 00:25:40.040 COMP_lvs0/lv0 : 3.01 3584.32 14.00 0.00 0.00 8887.30 56.12 15833.50 00:25:40.040 =================================================================================================================== 00:25:40.040 Total : 7141.32 27.90 0.00 0.00 8922.99 56.12 15833.50 00:25:40.040 0 00:25:40.040 18:28:48 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:25:40.040 18:28:48 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:40.040 18:28:48 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:40.040 18:28:48 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:40.040 18:28:48 compress_isal -- compress/compress.sh@78 -- # killprocess 2334289 00:25:40.040 18:28:48 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 2334289 ']' 00:25:40.040 18:28:48 compress_isal -- common/autotest_common.sh@954 -- # kill -0 2334289 00:25:40.040 18:28:48 compress_isal -- common/autotest_common.sh@955 -- # uname 00:25:40.040 18:28:48 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:40.040 18:28:48 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2334289 00:25:40.040 18:28:48 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:40.040 18:28:48 compress_isal 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:40.040 18:28:48 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2334289' 00:25:40.040 killing process with pid 2334289 00:25:40.040 18:28:48 compress_isal -- common/autotest_common.sh@969 -- # kill 2334289 00:25:40.040 Received shutdown signal, test time was about 3.000000 seconds 00:25:40.040 00:25:40.040 Latency(us) 00:25:40.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.040 =================================================================================================================== 00:25:40.040 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:40.040 18:28:48 compress_isal -- common/autotest_common.sh@974 -- # wait 2334289 00:25:42.579 18:28:50 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:25:42.579 18:28:50 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:42.579 18:28:50 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=2336448 00:25:42.579 18:28:50 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:25:42.579 18:28:50 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:42.579 18:28:50 compress_isal -- compress/compress.sh@57 -- # waitforlisten 2336448 00:25:42.579 18:28:50 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 2336448 ']' 00:25:42.579 18:28:50 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.579 18:28:50 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:42.579 18:28:50 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
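The killprocess trace above reduces to a small pattern: probe with kill -0, signal, then reap. A simplified sketch, assuming a plain kill suffices (the real autotest_common.sh helper additionally checks `uname` and the process name via `ps` before choosing how to kill):

```shell
# Hedged, simplified killprocess sketch (not the exact autotest_common.sh helper).
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only checks the process still exists
    if ! kill -0 "$pid" 2> /dev/null; then
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the child so no zombie is left; ignore the (killed) exit status
    wait "$pid" 2> /dev/null || true
    return 0
}
```

Used the way the trace does, e.g. `killprocess "$bdevio_pid"` in the teardown path.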
00:25:42.579 18:28:50 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:42.579 18:28:50 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:42.579 [2024-07-24 18:28:51.041351] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:25:42.579 [2024-07-24 18:28:51.041402] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336448 ] 00:25:42.579 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.579 EAL: Requested device 0000:b3:01.0 cannot be used (the same qat_pci_device_allocate()/EAL pair repeats for each remaining QAT VF, 0000:b3:01.1 through 0000:b3:02.7 and 0000:b5:01.0 through 0000:b5:02.7) 00:25:42.580 [2024-07-24 18:28:51.132090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:42.838 [2024-07-24 18:28:51.206971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.838 [2024-07-24 18:28:51.207066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.838 [2024-07-24 18:28:51.207069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.405 18:28:51 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:43.405 18:28:51 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:25:43.405 18:28:51 compress_isal -- compress/compress.sh@58 -- # create_vols 00:25:43.405 18:28:51 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:43.405 18:28:51 compress_isal -- compress/compress.sh@34 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:46.693 18:28:54 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:46.693 18:28:54 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:25:46.693 18:28:54 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:46.693 18:28:54 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:46.693 18:28:54 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:46.693 18:28:54 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:46.693 18:28:54 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:46.693 18:28:55 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:46.693 [ 00:25:46.693 { 00:25:46.693 "name": "Nvme0n1", 00:25:46.693 "aliases": [ 00:25:46.693 "5d8472ee-4572-4640-a90e-be9fa3e03d72" 00:25:46.693 ], 00:25:46.693 "product_name": "NVMe disk", 00:25:46.693 "block_size": 512, 00:25:46.693 "num_blocks": 3907029168, 00:25:46.693 "uuid": "5d8472ee-4572-4640-a90e-be9fa3e03d72", 00:25:46.693 "assigned_rate_limits": { 00:25:46.693 "rw_ios_per_sec": 0, 00:25:46.693 "rw_mbytes_per_sec": 0, 00:25:46.693 "r_mbytes_per_sec": 0, 00:25:46.693 "w_mbytes_per_sec": 0 00:25:46.693 }, 00:25:46.693 "claimed": false, 00:25:46.693 "zoned": false, 00:25:46.693 "supported_io_types": { 00:25:46.693 "read": true, 00:25:46.693 "write": true, 00:25:46.693 "unmap": true, 00:25:46.693 "flush": true, 00:25:46.693 "reset": true, 00:25:46.693 "nvme_admin": true, 00:25:46.693 "nvme_io": true, 00:25:46.693 "nvme_io_md": false, 00:25:46.693 "write_zeroes": true, 00:25:46.693 "zcopy": false, 00:25:46.693 "get_zone_info": false, 00:25:46.693 "zone_management": false, 00:25:46.693 "zone_append": false, 
00:25:46.693 "compare": false, 00:25:46.693 "compare_and_write": false, 00:25:46.693 "abort": true, 00:25:46.693 "seek_hole": false, 00:25:46.693 "seek_data": false, 00:25:46.693 "copy": false, 00:25:46.693 "nvme_iov_md": false 00:25:46.693 }, 00:25:46.693 "driver_specific": { 00:25:46.693 "nvme": [ 00:25:46.693 { 00:25:46.693 "pci_address": "0000:d8:00.0", 00:25:46.693 "trid": { 00:25:46.693 "trtype": "PCIe", 00:25:46.693 "traddr": "0000:d8:00.0" 00:25:46.693 }, 00:25:46.693 "ctrlr_data": { 00:25:46.693 "cntlid": 0, 00:25:46.693 "vendor_id": "0x8086", 00:25:46.693 "model_number": "INTEL SSDPE2KX020T8", 00:25:46.693 "serial_number": "BTLJ125504VE2P0BGN", 00:25:46.693 "firmware_revision": "VDV10170", 00:25:46.693 "oacs": { 00:25:46.693 "security": 0, 00:25:46.693 "format": 1, 00:25:46.693 "firmware": 1, 00:25:46.693 "ns_manage": 1 00:25:46.693 }, 00:25:46.693 "multi_ctrlr": false, 00:25:46.693 "ana_reporting": false 00:25:46.693 }, 00:25:46.693 "vs": { 00:25:46.693 "nvme_version": "1.2" 00:25:46.693 }, 00:25:46.693 "ns_data": { 00:25:46.693 "id": 1, 00:25:46.693 "can_share": false 00:25:46.693 } 00:25:46.693 } 00:25:46.693 ], 00:25:46.693 "mp_policy": "active_passive" 00:25:46.693 } 00:25:46.693 } 00:25:46.693 ] 00:25:46.693 18:28:55 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:46.693 18:28:55 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:48.069 e1ea3da8-8aa6-49ea-bf2a-58421388bba1 00:25:48.069 18:28:56 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:48.069 e39bb5c1-81ca-4f90-9173-1bf5df05683f 00:25:48.069 18:28:56 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:48.069 18:28:56 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:25:48.069 18:28:56 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:48.069 18:28:56 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:48.069 18:28:56 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:48.069 18:28:56 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:48.069 18:28:56 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:48.329 18:28:56 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:48.329 [ 00:25:48.329 { 00:25:48.329 "name": "e39bb5c1-81ca-4f90-9173-1bf5df05683f", 00:25:48.329 "aliases": [ 00:25:48.329 "lvs0/lv0" 00:25:48.329 ], 00:25:48.329 "product_name": "Logical Volume", 00:25:48.329 "block_size": 512, 00:25:48.329 "num_blocks": 204800, 00:25:48.329 "uuid": "e39bb5c1-81ca-4f90-9173-1bf5df05683f", 00:25:48.329 "assigned_rate_limits": { 00:25:48.329 "rw_ios_per_sec": 0, 00:25:48.329 "rw_mbytes_per_sec": 0, 00:25:48.329 "r_mbytes_per_sec": 0, 00:25:48.329 "w_mbytes_per_sec": 0 00:25:48.329 }, 00:25:48.329 "claimed": false, 00:25:48.329 "zoned": false, 00:25:48.329 "supported_io_types": { 00:25:48.329 "read": true, 00:25:48.329 "write": true, 00:25:48.329 "unmap": true, 00:25:48.329 "flush": false, 00:25:48.329 "reset": true, 00:25:48.329 "nvme_admin": false, 00:25:48.329 "nvme_io": false, 00:25:48.329 "nvme_io_md": false, 00:25:48.329 "write_zeroes": true, 00:25:48.329 "zcopy": false, 00:25:48.329 "get_zone_info": false, 00:25:48.329 "zone_management": false, 00:25:48.329 "zone_append": false, 00:25:48.329 "compare": false, 00:25:48.329 "compare_and_write": false, 00:25:48.329 "abort": false, 00:25:48.329 "seek_hole": true, 00:25:48.329 "seek_data": true, 00:25:48.329 "copy": false, 00:25:48.329 "nvme_iov_md": false 00:25:48.329 }, 00:25:48.329 "driver_specific": { 00:25:48.329 "lvol": { 00:25:48.329 
"lvol_store_uuid": "e1ea3da8-8aa6-49ea-bf2a-58421388bba1", 00:25:48.329 "base_bdev": "Nvme0n1", 00:25:48.329 "thin_provision": true, 00:25:48.329 "num_allocated_clusters": 0, 00:25:48.329 "snapshot": false, 00:25:48.329 "clone": false, 00:25:48.329 "esnap_clone": false 00:25:48.329 } 00:25:48.329 } 00:25:48.329 } 00:25:48.329 ] 00:25:48.588 18:28:56 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:48.588 18:28:56 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:25:48.588 18:28:56 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:25:48.588 [2024-07-24 18:28:57.089094] vbdev_compress.c: 999:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:48.588 COMP_lvs0/lv0 00:25:48.588 18:28:57 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:48.588 18:28:57 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:25:48.588 18:28:57 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:48.588 18:28:57 compress_isal -- common/autotest_common.sh@901 -- # local i 00:25:48.588 18:28:57 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:48.588 18:28:57 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:48.588 18:28:57 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:48.847 18:28:57 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:48.847 [ 00:25:48.847 { 00:25:48.847 "name": "COMP_lvs0/lv0", 00:25:48.847 "aliases": [ 00:25:48.847 "9f6a7b3a-8015-5e06-9803-c2c26da47331" 00:25:48.847 ], 00:25:48.847 "product_name": "compress", 00:25:48.847 "block_size": 512, 00:25:48.847 
"num_blocks": 200704, 00:25:48.847 "uuid": "9f6a7b3a-8015-5e06-9803-c2c26da47331", 00:25:48.847 "assigned_rate_limits": { 00:25:48.847 "rw_ios_per_sec": 0, 00:25:48.847 "rw_mbytes_per_sec": 0, 00:25:48.847 "r_mbytes_per_sec": 0, 00:25:48.847 "w_mbytes_per_sec": 0 00:25:48.847 }, 00:25:48.847 "claimed": false, 00:25:48.847 "zoned": false, 00:25:48.847 "supported_io_types": { 00:25:48.847 "read": true, 00:25:48.847 "write": true, 00:25:48.847 "unmap": false, 00:25:48.847 "flush": false, 00:25:48.847 "reset": false, 00:25:48.847 "nvme_admin": false, 00:25:48.847 "nvme_io": false, 00:25:48.847 "nvme_io_md": false, 00:25:48.847 "write_zeroes": true, 00:25:48.847 "zcopy": false, 00:25:48.847 "get_zone_info": false, 00:25:48.847 "zone_management": false, 00:25:48.847 "zone_append": false, 00:25:48.847 "compare": false, 00:25:48.847 "compare_and_write": false, 00:25:48.847 "abort": false, 00:25:48.847 "seek_hole": false, 00:25:48.847 "seek_data": false, 00:25:48.847 "copy": false, 00:25:48.847 "nvme_iov_md": false 00:25:48.847 }, 00:25:48.847 "driver_specific": { 00:25:48.847 "compress": { 00:25:48.847 "name": "COMP_lvs0/lv0", 00:25:48.847 "base_bdev_name": "e39bb5c1-81ca-4f90-9173-1bf5df05683f", 00:25:48.847 "pm_path": "/tmp/pmem/a99e2de7-398b-4015-b5cb-4ac7e49ce974" 00:25:48.847 } 00:25:48.847 } 00:25:48.847 } 00:25:48.847 ] 00:25:48.847 18:28:57 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:25:48.847 18:28:57 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:49.105 I/O targets: 00:25:49.105 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:25:49.105 00:25:49.105 00:25:49.105 CUnit - A unit testing framework for C - Version 2.1-3 00:25:49.105 http://cunit.sourceforge.net/ 00:25:49.105 00:25:49.105 00:25:49.105 Suite: bdevio tests on: COMP_lvs0/lv0 00:25:49.105 Test: blockdev write read block ...passed 00:25:49.105 Test: blockdev write zeroes read block 
...passed 00:25:49.105 Test: blockdev write zeroes read no split ...passed 00:25:49.105 Test: blockdev write zeroes read split ...passed 00:25:49.105 Test: blockdev write zeroes read split partial ...passed 00:25:49.105 Test: blockdev reset ...[2024-07-24 18:28:57.582026] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:25:49.105 passed 00:25:49.106 Test: blockdev write read 8 blocks ...passed 00:25:49.106 Test: blockdev write read size > 128k ...passed 00:25:49.106 Test: blockdev write read invalid size ...passed 00:25:49.106 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:49.106 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:49.106 Test: blockdev write read max offset ...passed 00:25:49.106 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:49.106 Test: blockdev writev readv 8 blocks ...passed 00:25:49.106 Test: blockdev writev readv 30 x 1block ...passed 00:25:49.106 Test: blockdev writev readv block ...passed 00:25:49.106 Test: blockdev writev readv size > 128k ...passed 00:25:49.106 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:49.106 Test: blockdev comparev and writev ...passed 00:25:49.106 Test: blockdev nvme passthru rw ...passed 00:25:49.106 Test: blockdev nvme passthru vendor specific ...passed 00:25:49.106 Test: blockdev nvme admin passthru ...passed 00:25:49.106 Test: blockdev copy ...passed 00:25:49.106 00:25:49.106 Run Summary: Type Total Ran Passed Failed Inactive 00:25:49.106 suites 1 1 n/a 0 0 00:25:49.106 tests 23 23 23 0 0 00:25:49.106 asserts 130 130 130 0 n/a 00:25:49.106 00:25:49.106 Elapsed time = 0.241 seconds 00:25:49.106 0 00:25:49.106 18:28:57 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:25:49.106 18:28:57 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:49.364 18:28:57 
compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:49.621 18:28:57 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:25:49.621 18:28:57 compress_isal -- compress/compress.sh@62 -- # killprocess 2336448 00:25:49.621 18:28:57 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 2336448 ']' 00:25:49.621 18:28:57 compress_isal -- common/autotest_common.sh@954 -- # kill -0 2336448 00:25:49.621 18:28:57 compress_isal -- common/autotest_common.sh@955 -- # uname 00:25:49.621 18:28:57 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:49.621 18:28:57 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2336448 00:25:49.621 18:28:58 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:49.621 18:28:58 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:49.621 18:28:58 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2336448' 00:25:49.621 killing process with pid 2336448 00:25:49.621 18:28:58 compress_isal -- common/autotest_common.sh@969 -- # kill 2336448 00:25:49.621 18:28:58 compress_isal -- common/autotest_common.sh@974 -- # wait 2336448 00:25:52.153 18:29:00 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:25:52.153 18:29:00 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:25:52.153 00:25:52.153 real 0m46.789s 00:25:52.153 user 1m44.848s 00:25:52.153 sys 0m3.488s 00:25:52.153 18:29:00 compress_isal -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:52.153 18:29:00 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:52.153 ************************************ 00:25:52.153 END TEST compress_isal 00:25:52.153 ************************************ 00:25:52.153 18:29:00 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:25:52.153 18:29:00 -- 
spdk/autotest.sh@360 -- # '[' 1 -eq 1 ']' 00:25:52.153 18:29:00 -- spdk/autotest.sh@361 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:25:52.153 18:29:00 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:52.153 18:29:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:52.153 18:29:00 -- common/autotest_common.sh@10 -- # set +x 00:25:52.153 ************************************ 00:25:52.153 START TEST blockdev_crypto_aesni 00:25:52.153 ************************************ 00:25:52.153 18:29:00 blockdev_crypto_aesni -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:25:52.153 * Looking for test storage... 00:25:52.153 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:25:52.153 18:29:00 blockdev_crypto_aesni -- 
bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # uname -s 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@681 -- # test_type=crypto_aesni 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # crypto_device= 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # dek= 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # env_ctx= 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == bdev ]] 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == crypto_* ]] 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2338115 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 2338115 00:25:52.153 18:29:00 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:25:52.153 18:29:00 blockdev_crypto_aesni -- common/autotest_common.sh@831 -- # '[' -z 2338115 ']' 00:25:52.153 
18:29:00 blockdev_crypto_aesni -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.153 18:29:00 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:52.153 18:29:00 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.153 18:29:00 blockdev_crypto_aesni -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:52.153 18:29:00 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:52.153 [2024-07-24 18:29:00.724962] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:25:52.153 [2024-07-24 18:29:00.725014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338115 ] 00:25:52.412 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:52.412 EAL: Requested device 0000:b3:01.0 cannot be used (the same qat_pci_device_allocate()/EAL pair repeats for each remaining QAT VF, 0000:b3:01.1 through 0000:b3:02.7 and 0000:b5:01.0 through 0000:b5:02.7) 00:25:52.413 [2024-07-24 18:29:00.818173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.413 [2024-07-24 18:29:00.891051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.981 18:29:01 blockdev_crypto_aesni -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:52.981 18:29:01 blockdev_crypto_aesni -- common/autotest_common.sh@864 -- # return 0
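The waitforlisten trace above (local max_retries=100, then the echo and the eventual `return 0`) is essentially a bounded poll: fail fast if the target dies, succeed once its RPC socket shows up. A hedged sketch; checking mere path existence is a simplification of the real helper, which probes the socket with an actual RPC call:

```shell
# Hedged, simplified waitforlisten sketch (not the exact autotest_common.sh helper).
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # the app dying during startup should fail fast, not time out
        kill -0 "$pid" 2> /dev/null || return 1
        # simplified readiness check: the socket path has appeared
        [ -e "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}
```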
00:25:52.981 18:29:01 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:25:52.981 18:29:01 blockdev_crypto_aesni -- bdev/blockdev.sh@704 -- # setup_crypto_aesni_conf 00:25:52.981 18:29:01 blockdev_crypto_aesni -- bdev/blockdev.sh@145 -- # rpc_cmd 00:25:52.981 18:29:01 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.981 18:29:01 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:52.981 [2024-07-24 18:29:01.524950] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:52.981 [2024-07-24 18:29:01.532979] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:52.981 [2024-07-24 18:29:01.540997] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:53.240 [2024-07-24 18:29:01.605230] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:55.768 true 00:25:55.768 true 00:25:55.768 true 00:25:55.768 true 00:25:55.768 Malloc0 00:25:55.768 Malloc1 00:25:55.768 Malloc2 00:25:55.768 Malloc3 00:25:55.768 [2024-07-24 18:29:03.913280] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:55.768 crypto_ram 00:25:55.768 [2024-07-24 18:29:03.921300] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:55.768 crypto_ram2 00:25:55.768 [2024-07-24 18:29:03.929321] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:55.768 crypto_ram3 00:25:55.768 [2024-07-24 18:29:03.937343] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:55.768 crypto_ram4 00:25:55.768 18:29:03 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.768 18:29:03 blockdev_crypto_aesni -- bdev/blockdev.sh@736 -- # rpc_cmd 
bdev_wait_for_examine 00:25:55.768 18:29:03 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.768 18:29:03 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:55.768 18:29:03 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.768 18:29:03 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # cat 00:25:55.768 18:29:03 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:25:55.768 18:29:03 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.768 18:29:03 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:55.768 18:29:03 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.768 18:29:03 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:25:55.768 18:29:03 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.768 18:29:03 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.768 18:29:04 blockdev_crypto_aesni -- 
common/autotest_common.sh@10 -- # set +x 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r .name 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "c86be9f0-669a-5df6-945c-fa58dff6ed7e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c86be9f0-669a-5df6-945c-fa58dff6ed7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "c80677b2-a282-5aff-a1dc-45746f010739"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c80677b2-a282-5aff-a1dc-45746f010739",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "fd4f5915-fba9-5158-9040-a9a2e160e61f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "fd4f5915-fba9-5158-9040-a9a2e160e61f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "a88bcc4d-9295-5420-ac7a-c245ac860a87"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": 
"a88bcc4d-9295-5420-ac7a-c245ac860a87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:25:55.768 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # killprocess 2338115 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@950 -- # '[' -z 2338115 ']' 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # kill -0 2338115 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # uname 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2338115 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:55.768 18:29:04 
blockdev_crypto_aesni -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2338115' 00:25:55.768 killing process with pid 2338115 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@969 -- # kill 2338115 00:25:55.768 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@974 -- # wait 2338115 00:25:56.405 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:56.405 18:29:04 blockdev_crypto_aesni -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:25:56.405 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:56.405 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:56.405 18:29:04 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:56.405 ************************************ 00:25:56.405 START TEST bdev_hello_world 00:25:56.405 ************************************ 00:25:56.405 18:29:04 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:25:56.405 [2024-07-24 18:29:04.751238] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:25:56.405 [2024-07-24 18:29:04.751282] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338838 ] 00:25:56.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.405 EAL: Requested device 0000:b3:01.0 cannot be used 00:25:56.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:01.1 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:01.2 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:01.3 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:01.4 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:01.5 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:01.6 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:01.7 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:02.0 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:02.1 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:02.2 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:02.3 cannot be used 
00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:02.4 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:02.5 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:02.6 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b3:02.7 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:01.0 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:01.1 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:01.2 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:01.3 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:01.4 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:01.5 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:01.6 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:01.7 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:02.0 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:02.1 cannot be used 00:25:56.406 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:02.2 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:02.3 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:02.4 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:02.5 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:02.6 cannot be used 00:25:56.406 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:56.406 EAL: Requested device 0000:b5:02.7 cannot be used 00:25:56.406 [2024-07-24 18:29:04.840690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.406 [2024-07-24 18:29:04.910330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.406 [2024-07-24 18:29:04.931212] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:56.406 [2024-07-24 18:29:04.939237] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:56.406 [2024-07-24 18:29:04.947254] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:56.664 [2024-07-24 18:29:05.044390] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:59.199 [2024-07-24 18:29:07.203133] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:59.199 [2024-07-24 18:29:07.203188] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:25:59.199 [2024-07-24 18:29:07.203198] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:25:59.199 [2024-07-24 18:29:07.211154] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:59.199 [2024-07-24 18:29:07.211167] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:25:59.199 [2024-07-24 18:29:07.211174] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:59.199 [2024-07-24 18:29:07.219172] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:59.199 [2024-07-24 18:29:07.219183] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:25:59.200 [2024-07-24 18:29:07.219190] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:59.200 [2024-07-24 18:29:07.227192] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:59.200 [2024-07-24 18:29:07.227203] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:25:59.200 [2024-07-24 18:29:07.227210] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:59.200 [2024-07-24 18:29:07.294864] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:59.200 [2024-07-24 18:29:07.294898] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:25:59.200 [2024-07-24 18:29:07.294911] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:59.200 [2024-07-24 18:29:07.295783] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:59.200 [2024-07-24 18:29:07.295836] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:59.200 [2024-07-24 18:29:07.295847] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:59.200 [2024-07-24 18:29:07.295877] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:25:59.200 00:25:59.200 [2024-07-24 18:29:07.295890] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:59.200 00:25:59.200 real 0m2.903s 00:25:59.200 user 0m2.586s 00:25:59.200 sys 0m0.290s 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:25:59.200 ************************************ 00:25:59.200 END TEST bdev_hello_world 00:25:59.200 ************************************ 00:25:59.200 18:29:07 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:25:59.200 18:29:07 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:59.200 18:29:07 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:59.200 18:29:07 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:59.200 ************************************ 00:25:59.200 START TEST bdev_bounds 00:25:59.200 ************************************ 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2339304 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2339304' 00:25:59.200 Process bdevio pid: 2339304 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 2339304 00:25:59.200 18:29:07 
blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 2339304 ']' 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.200 18:29:07 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:59.200 [2024-07-24 18:29:07.736196] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:25:59.200 [2024-07-24 18:29:07.736236] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2339304 ] 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:01.0 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:01.1 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:01.2 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:01.3 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:01.4 cannot be used 00:25:59.200 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:01.5 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:01.6 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:01.7 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:02.0 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:02.1 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:02.2 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:02.3 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:02.4 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:02.5 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:02.6 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b3:02.7 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:01.0 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:01.1 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:01.2 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:25:59.200 EAL: Requested device 0000:b5:01.3 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:01.4 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:01.5 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:01.6 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:01.7 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:02.0 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:02.1 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:02.2 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:02.3 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:02.4 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:02.5 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:02.6 cannot be used 00:25:59.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:59.200 EAL: Requested device 0000:b5:02.7 cannot be used 00:25:59.459 [2024-07-24 18:29:07.829941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:59.459 [2024-07-24 18:29:07.906261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.459 
[2024-07-24 18:29:07.906355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.459 [2024-07-24 18:29:07.906357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.459 [2024-07-24 18:29:07.927438] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:59.459 [2024-07-24 18:29:07.935469] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:59.459 [2024-07-24 18:29:07.943490] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:59.459 [2024-07-24 18:29:08.040665] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:01.992 [2024-07-24 18:29:10.209802] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:01.992 [2024-07-24 18:29:10.209857] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:01.992 [2024-07-24 18:29:10.209867] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:01.992 [2024-07-24 18:29:10.217817] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:01.992 [2024-07-24 18:29:10.217833] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:01.992 [2024-07-24 18:29:10.217840] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:01.992 [2024-07-24 18:29:10.225839] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:01.992 [2024-07-24 18:29:10.225850] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:01.992 [2024-07-24 18:29:10.225858] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:01.992 
[2024-07-24 18:29:10.233861] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:01.992 [2024-07-24 18:29:10.233872] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:01.992 [2024-07-24 18:29:10.233879] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:01.992 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:01.992 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:26:01.992 18:29:10 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:01.992 I/O targets: 00:26:01.992 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:26:01.992 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:26:01.992 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:26:01.992 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:26:01.992 00:26:01.992 00:26:01.992 CUnit - A unit testing framework for C - Version 2.1-3 00:26:01.992 http://cunit.sourceforge.net/ 00:26:01.992 00:26:01.992 00:26:01.992 Suite: bdevio tests on: crypto_ram4 00:26:01.992 Test: blockdev write read block ...passed 00:26:01.992 Test: blockdev write zeroes read block ...passed 00:26:01.992 Test: blockdev write zeroes read no split ...passed 00:26:01.992 Test: blockdev write zeroes read split ...passed 00:26:01.992 Test: blockdev write zeroes read split partial ...passed 00:26:01.992 Test: blockdev reset ...passed 00:26:01.992 Test: blockdev write read 8 blocks ...passed 00:26:01.992 Test: blockdev write read size > 128k ...passed 00:26:01.992 Test: blockdev write read invalid size ...passed 00:26:01.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:01.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:01.992 Test: blockdev write 
read max offset ...passed 00:26:01.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:01.992 Test: blockdev writev readv 8 blocks ...passed 00:26:01.992 Test: blockdev writev readv 30 x 1block ...passed 00:26:01.992 Test: blockdev writev readv block ...passed 00:26:01.992 Test: blockdev writev readv size > 128k ...passed 00:26:01.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:01.992 Test: blockdev comparev and writev ...passed 00:26:01.992 Test: blockdev nvme passthru rw ...passed 00:26:01.992 Test: blockdev nvme passthru vendor specific ...passed 00:26:01.992 Test: blockdev nvme admin passthru ...passed 00:26:01.992 Test: blockdev copy ...passed 00:26:01.992 Suite: bdevio tests on: crypto_ram3 00:26:01.992 Test: blockdev write read block ...passed 00:26:01.992 Test: blockdev write zeroes read block ...passed 00:26:01.992 Test: blockdev write zeroes read no split ...passed 00:26:01.992 Test: blockdev write zeroes read split ...passed 00:26:01.992 Test: blockdev write zeroes read split partial ...passed 00:26:01.992 Test: blockdev reset ...passed 00:26:01.992 Test: blockdev write read 8 blocks ...passed 00:26:01.992 Test: blockdev write read size > 128k ...passed 00:26:01.992 Test: blockdev write read invalid size ...passed 00:26:01.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:01.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:01.992 Test: blockdev write read max offset ...passed 00:26:01.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:01.992 Test: blockdev writev readv 8 blocks ...passed 00:26:01.992 Test: blockdev writev readv 30 x 1block ...passed 00:26:01.992 Test: blockdev writev readv block ...passed 00:26:01.992 Test: blockdev writev readv size > 128k ...passed 00:26:01.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:01.992 Test: blockdev comparev and writev ...passed 
00:26:01.993 Test: blockdev nvme passthru rw ...passed 00:26:01.993 Test: blockdev nvme passthru vendor specific ...passed 00:26:01.993 Test: blockdev nvme admin passthru ...passed 00:26:01.993 Test: blockdev copy ...passed 00:26:01.993 Suite: bdevio tests on: crypto_ram2 00:26:01.993 Test: blockdev write read block ...passed 00:26:01.993 Test: blockdev write zeroes read block ...passed 00:26:01.993 Test: blockdev write zeroes read no split ...passed 00:26:01.993 Test: blockdev write zeroes read split ...passed 00:26:01.993 Test: blockdev write zeroes read split partial ...passed 00:26:01.993 Test: blockdev reset ...passed 00:26:01.993 Test: blockdev write read 8 blocks ...passed 00:26:01.993 Test: blockdev write read size > 128k ...passed 00:26:01.993 Test: blockdev write read invalid size ...passed 00:26:01.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:01.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:01.993 Test: blockdev write read max offset ...passed 00:26:01.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:01.993 Test: blockdev writev readv 8 blocks ...passed 00:26:01.993 Test: blockdev writev readv 30 x 1block ...passed 00:26:01.993 Test: blockdev writev readv block ...passed 00:26:01.993 Test: blockdev writev readv size > 128k ...passed 00:26:01.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:01.993 Test: blockdev comparev and writev ...passed 00:26:01.993 Test: blockdev nvme passthru rw ...passed 00:26:01.993 Test: blockdev nvme passthru vendor specific ...passed 00:26:01.993 Test: blockdev nvme admin passthru ...passed 00:26:01.993 Test: blockdev copy ...passed 00:26:01.993 Suite: bdevio tests on: crypto_ram 00:26:01.993 Test: blockdev write read block ...passed 00:26:01.993 Test: blockdev write zeroes read block ...passed 00:26:01.993 Test: blockdev write zeroes read no split ...passed 00:26:02.251 Test: blockdev write zeroes 
read split ...passed 00:26:02.251 Test: blockdev write zeroes read split partial ...passed 00:26:02.251 Test: blockdev reset ...passed 00:26:02.251 Test: blockdev write read 8 blocks ...passed 00:26:02.251 Test: blockdev write read size > 128k ...passed 00:26:02.251 Test: blockdev write read invalid size ...passed 00:26:02.251 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:02.251 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:02.251 Test: blockdev write read max offset ...passed 00:26:02.251 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:02.251 Test: blockdev writev readv 8 blocks ...passed 00:26:02.251 Test: blockdev writev readv 30 x 1block ...passed 00:26:02.251 Test: blockdev writev readv block ...passed 00:26:02.251 Test: blockdev writev readv size > 128k ...passed 00:26:02.251 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:02.251 Test: blockdev comparev and writev ...passed 00:26:02.251 Test: blockdev nvme passthru rw ...passed 00:26:02.251 Test: blockdev nvme passthru vendor specific ...passed 00:26:02.251 Test: blockdev nvme admin passthru ...passed 00:26:02.251 Test: blockdev copy ...passed 00:26:02.251 00:26:02.251 Run Summary: Type Total Ran Passed Failed Inactive 00:26:02.251 suites 4 4 n/a 0 0 00:26:02.251 tests 92 92 92 0 0 00:26:02.251 asserts 520 520 520 0 n/a 00:26:02.251 00:26:02.251 Elapsed time = 0.507 seconds 00:26:02.251 0 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2339304 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 2339304 ']' 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 2339304 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2339304 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2339304' 00:26:02.251 killing process with pid 2339304 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@969 -- # kill 2339304 00:26:02.251 18:29:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@974 -- # wait 2339304 00:26:02.509 18:29:11 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:26:02.509 00:26:02.509 real 0m3.372s 00:26:02.509 user 0m9.420s 00:26:02.509 sys 0m0.497s 00:26:02.509 18:29:11 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:02.509 18:29:11 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:02.509 ************************************ 00:26:02.509 END TEST bdev_bounds 00:26:02.509 ************************************ 00:26:02.509 18:29:11 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:26:02.509 18:29:11 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:02.509 18:29:11 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:02.509 18:29:11 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:02.768 ************************************ 00:26:02.768 START TEST bdev_nbd 00:26:02.768 ************************************ 00:26:02.768 
18:29:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 
'crypto_ram4') 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2339971 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2339971 /var/tmp/spdk-nbd.sock 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 2339971 ']' 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:02.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.768 18:29:11 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:02.768 [2024-07-24 18:29:11.199115] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:26:02.769 [2024-07-24 18:29:11.199156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:01.0 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:01.1 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:01.2 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:01.3 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:01.4 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:01.5 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:01.6 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:01.7 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:02.0 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:02.1 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:02.2 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:02.3 cannot be used 00:26:02.769 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:02.4 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:02.5 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:02.6 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b3:02.7 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:01.0 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:01.1 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:01.2 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:01.3 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:01.4 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:01.5 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:01.6 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:01.7 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:02.0 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:02.1 cannot be used 00:26:02.769 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:02.2 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:02.3 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:02.4 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:02.5 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:02.6 cannot be used 00:26:02.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:02.769 EAL: Requested device 0000:b5:02.7 cannot be used 00:26:02.769 [2024-07-24 18:29:11.292651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.027 [2024-07-24 18:29:11.367305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.027 [2024-07-24 18:29:11.388238] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:26:03.027 [2024-07-24 18:29:11.396258] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:26:03.027 [2024-07-24 18:29:11.404277] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:26:03.027 [2024-07-24 18:29:11.497672] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:05.565 [2024-07-24 18:29:13.664861] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:05.565 [2024-07-24 18:29:13.664911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:05.565 [2024-07-24 18:29:13.664921] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:26:05.565 [2024-07-24 18:29:13.672880] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:05.565 [2024-07-24 18:29:13.672891] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:05.565 [2024-07-24 18:29:13.672898] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:05.565 [2024-07-24 18:29:13.680900] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:05.565 [2024-07-24 18:29:13.680911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:05.565 [2024-07-24 18:29:13.680918] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:05.565 [2024-07-24 18:29:13.688920] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:05.565 [2024-07-24 18:29:13.688931] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:05.565 [2024-07-24 18:29:13.688938] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 
00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:05.565 
18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:05.565 18:29:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:05.565 1+0 records in 00:26:05.565 1+0 records out 00:26:05.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288322 s, 14.2 MB/s 00:26:05.566 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:05.566 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:05.566 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:05.566 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:05.566 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:05.566 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:05.566 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:26:05.566 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:26:05.825 18:29:14 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:05.825 1+0 records in 00:26:05.825 1+0 records out 00:26:05.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274994 s, 14.9 MB/s 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 
00:26:05.825 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:06.085 1+0 records in 00:26:06.085 1+0 records out 00:26:06.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299542 s, 13.7 MB/s 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:06.085 18:29:14 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:06.085 1+0 records in 00:26:06.085 1+0 records out 00:26:06.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330666 s, 12.4 MB/s 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:26:06.085 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:06.344 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:06.344 { 00:26:06.344 "nbd_device": "/dev/nbd0", 00:26:06.344 "bdev_name": "crypto_ram" 00:26:06.344 }, 00:26:06.344 { 00:26:06.344 "nbd_device": "/dev/nbd1", 00:26:06.344 "bdev_name": "crypto_ram2" 00:26:06.344 }, 00:26:06.344 { 00:26:06.344 "nbd_device": "/dev/nbd2", 00:26:06.344 "bdev_name": "crypto_ram3" 00:26:06.344 }, 00:26:06.344 { 00:26:06.344 "nbd_device": "/dev/nbd3", 00:26:06.344 "bdev_name": "crypto_ram4" 00:26:06.344 } 00:26:06.344 ]' 00:26:06.344 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:06.344 18:29:14 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:06.344 { 00:26:06.344 "nbd_device": "/dev/nbd0", 00:26:06.344 "bdev_name": "crypto_ram" 00:26:06.344 }, 00:26:06.344 { 00:26:06.344 "nbd_device": "/dev/nbd1", 00:26:06.344 "bdev_name": "crypto_ram2" 00:26:06.344 }, 00:26:06.344 { 00:26:06.344 "nbd_device": "/dev/nbd2", 00:26:06.344 "bdev_name": "crypto_ram3" 00:26:06.344 }, 00:26:06.344 { 00:26:06.344 "nbd_device": "/dev/nbd3", 00:26:06.344 "bdev_name": "crypto_ram4" 00:26:06.344 } 00:26:06.344 ]' 00:26:06.344 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:06.344 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:26:06.344 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:06.344 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:26:06.344 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:06.344 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:06.344 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:06.344 18:29:14 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:06.603 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:06.603 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:06.603 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:06.603 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:06.603 18:29:15 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:06.603 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:06.603 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:06.603 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:06.603 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:06.603 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:26:06.862 
18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:06.862 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:07.122 18:29:15 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:07.381 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11') 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:26:07.382 18:29:15 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:26:07.641 /dev/nbd0 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:07.641 18:29:16 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:07.641 1+0 records in 00:26:07.641 1+0 records out 00:26:07.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263643 s, 15.5 MB/s 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:26:07.641 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:26:07.900 /dev/nbd1 00:26:07.900 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:26:07.900 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:07.900 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:07.900 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:07.900 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:07.900 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:07.900 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:07.900 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:07.900 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:07.901 1+0 records in 00:26:07.901 1+0 records out 00:26:07.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286918 s, 14.3 MB/s 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:26:07.901 /dev/nbd10 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:07.901 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:08.160 1+0 records in 00:26:08.160 1+0 records out 00:26:08.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270739 s, 15.1 MB/s 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:26:08.160 /dev/nbd11 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:08.160 1+0 records in 00:26:08.160 1+0 records out 00:26:08.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314707 s, 13.0 MB/s 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:08.160 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:08.161 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:08.420 { 00:26:08.420 "nbd_device": "/dev/nbd0", 00:26:08.420 "bdev_name": "crypto_ram" 00:26:08.420 }, 00:26:08.420 { 00:26:08.420 "nbd_device": "/dev/nbd1", 00:26:08.420 "bdev_name": "crypto_ram2" 00:26:08.420 }, 00:26:08.420 { 00:26:08.420 "nbd_device": "/dev/nbd10", 00:26:08.420 "bdev_name": "crypto_ram3" 00:26:08.420 }, 00:26:08.420 { 00:26:08.420 "nbd_device": "/dev/nbd11", 
00:26:08.420 "bdev_name": "crypto_ram4" 00:26:08.420 } 00:26:08.420 ]' 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:08.420 { 00:26:08.420 "nbd_device": "/dev/nbd0", 00:26:08.420 "bdev_name": "crypto_ram" 00:26:08.420 }, 00:26:08.420 { 00:26:08.420 "nbd_device": "/dev/nbd1", 00:26:08.420 "bdev_name": "crypto_ram2" 00:26:08.420 }, 00:26:08.420 { 00:26:08.420 "nbd_device": "/dev/nbd10", 00:26:08.420 "bdev_name": "crypto_ram3" 00:26:08.420 }, 00:26:08.420 { 00:26:08.420 "nbd_device": "/dev/nbd11", 00:26:08.420 "bdev_name": "crypto_ram4" 00:26:08.420 } 00:26:08.420 ]' 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:08.420 /dev/nbd1 00:26:08.420 /dev/nbd10 00:26:08.420 /dev/nbd11' 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:08.420 /dev/nbd1 00:26:08.420 /dev/nbd10 00:26:08.420 /dev/nbd11' 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:08.420 18:29:16 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:08.420 256+0 records in 00:26:08.420 256+0 records out 00:26:08.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109632 s, 95.6 MB/s 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:08.420 18:29:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:08.679 256+0 records in 00:26:08.679 256+0 records out 00:26:08.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0367798 s, 28.5 MB/s 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:08.679 256+0 records in 00:26:08.679 256+0 records out 00:26:08.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0435491 s, 24.1 MB/s 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:26:08.679 256+0 records in 00:26:08.679 256+0 records out 00:26:08.679 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.038851 s, 27.0 MB/s 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:26:08.679 256+0 records in 00:26:08.679 256+0 records out 00:26:08.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0379024 s, 27.7 MB/s 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:08.679 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:08.938 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:08.938 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:08.938 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:08.938 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:26:08.938 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:08.938 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:08.938 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:08.938 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:08.938 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:08.938 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd10 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:09.197 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:09.456 18:29:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:09.456 18:29:17 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:09.715 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:09.715 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:09.715 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:09.715 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:09.716 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:09.716 18:29:18 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:09.975 malloc_lvol_verify 00:26:09.975 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:09.975 670ba73d-03b7-48c7-9757-1974c824beff 00:26:09.975 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:10.234 bf3c051f-54ab-48fa-9f54-82b597b79bc2 00:26:10.234 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:10.493 /dev/nbd0 00:26:10.493 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:10.493 mke2fs 1.46.5 (30-Dec-2021) 00:26:10.493 Discarding device blocks: 0/4096 done 00:26:10.493 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:10.493 00:26:10.493 Allocating group tables: 0/1 done 00:26:10.493 Writing inode tables: 0/1 done 00:26:10.493 Creating journal (1024 blocks): done 00:26:10.493 Writing superblocks and filesystem accounting information: 0/1 done 00:26:10.493 00:26:10.493 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:10.493 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:10.493 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:10.493 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:10.493 18:29:18 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:10.494 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:10.494 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:10.494 18:29:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2339971 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 2339971 ']' 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 2339971 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2339971 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2339971' 00:26:10.753 killing process with pid 2339971 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@969 -- # kill 2339971 00:26:10.753 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@974 -- # wait 2339971 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:26:11.012 00:26:11.012 real 0m8.371s 00:26:11.012 user 0m10.526s 00:26:11.012 sys 0m3.215s 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:11.012 ************************************ 00:26:11.012 END TEST bdev_nbd 00:26:11.012 ************************************ 00:26:11.012 18:29:19 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:26:11.012 18:29:19 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = nvme ']' 00:26:11.012 18:29:19 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = gpt ']' 00:26:11.012 18:29:19 blockdev_crypto_aesni -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:26:11.012 18:29:19 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:11.012 18:29:19 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.012 18:29:19 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:11.012 
************************************ 00:26:11.012 START TEST bdev_fio 00:26:11.012 ************************************ 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:26:11.012 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:11.012 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 
00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram4]' 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram4 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:11.272 ************************************ 00:26:11.272 START TEST bdev_fio_rw_verify 00:26:11.272 ************************************ 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:11.272 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.273 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:11.273 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:11.273 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:11.273 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:11.273 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:11.273 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:11.273 18:29:19 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:11.531 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:11.531 
job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:11.531 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:11.531 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:11.531 fio-3.35 00:26:11.531 Starting 4 threads 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:01.0 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:01.1 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:01.2 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:01.3 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:01.4 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:01.5 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:01.6 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:01.7 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:02.0 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:02.1 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:02.2 cannot be used 
00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:02.3 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:02.4 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:02.5 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:02.6 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b3:02.7 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:01.0 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:01.1 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:01.2 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:01.3 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:01.4 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:01.5 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:01.6 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:01.7 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:02.0 cannot be used 00:26:11.790 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:02.1 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:02.2 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:02.3 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:02.4 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:02.5 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:02.6 cannot be used 00:26:11.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.790 EAL: Requested device 0000:b5:02.7 cannot be used 00:26:26.743 00:26:26.743 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2342241: Wed Jul 24 18:29:32 2024 00:26:26.743 read: IOPS=32.5k, BW=127MiB/s (133MB/s)(1271MiB/10001msec) 00:26:26.743 slat (usec): min=11, max=409, avg=41.46, stdev=29.12 00:26:26.743 clat (usec): min=8, max=2165, avg=219.88, stdev=159.80 00:26:26.743 lat (usec): min=31, max=2330, avg=261.34, stdev=177.95 00:26:26.743 clat percentiles (usec): 00:26:26.743 | 50.000th=[ 180], 99.000th=[ 848], 99.900th=[ 1057], 99.990th=[ 1205], 00:26:26.743 | 99.999th=[ 1975] 00:26:26.743 write: IOPS=35.6k, BW=139MiB/s (146MB/s)(1358MiB/9759msec); 0 zone resets 00:26:26.743 slat (usec): min=14, max=1382, avg=49.82, stdev=28.94 00:26:26.743 clat (usec): min=23, max=2593, avg=266.48, stdev=185.96 00:26:26.743 lat (usec): min=52, max=2895, avg=316.30, stdev=203.68 00:26:26.743 clat percentiles (usec): 00:26:26.743 | 50.000th=[ 229], 99.000th=[ 955], 99.900th=[ 1237], 99.990th=[ 1418], 00:26:26.743 | 99.999th=[ 2474] 00:26:26.743 bw ( KiB/s): min=111104, 
max=174864, per=98.19%, avg=139869.79, stdev=5257.68, samples=76 00:26:26.743 iops : min=27772, max=43716, avg=34967.37, stdev=1314.41, samples=76 00:26:26.743 lat (usec) : 10=0.01%, 20=0.01%, 50=3.73%, 100=12.81%, 250=47.73% 00:26:26.743 lat (usec) : 500=27.28%, 750=6.04%, 1000=1.85% 00:26:26.743 lat (msec) : 2=0.55%, 4=0.01% 00:26:26.743 cpu : usr=99.70%, sys=0.00%, ctx=62, majf=0, minf=239 00:26:26.743 IO depths : 1=10.4%, 2=25.5%, 4=51.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.743 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.743 issued rwts: total=325454,347533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.743 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:26.743 00:26:26.743 Run status group 0 (all jobs): 00:26:26.743 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=1271MiB (1333MB), run=10001-10001msec 00:26:26.743 WRITE: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=1358MiB (1423MB), run=9759-9759msec 00:26:26.743 00:26:26.743 real 0m13.335s 00:26:26.743 user 0m51.058s 00:26:26.743 sys 0m0.463s 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:26.743 ************************************ 00:26:26.743 END TEST bdev_fio_rw_verify 00:26:26.743 ************************************ 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 
00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:26:26.743 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "c86be9f0-669a-5df6-945c-fa58dff6ed7e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' 
"num_blocks": 65536,' ' "uuid": "c86be9f0-669a-5df6-945c-fa58dff6ed7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "c80677b2-a282-5aff-a1dc-45746f010739"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c80677b2-a282-5aff-a1dc-45746f010739",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' 
' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "fd4f5915-fba9-5158-9040-a9a2e160e61f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "fd4f5915-fba9-5158-9040-a9a2e160e61f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "a88bcc4d-9295-5420-ac7a-c245ac860a87"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "a88bcc4d-9295-5420-ac7a-c245ac860a87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' 
"abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:26:26.744 crypto_ram2 00:26:26.744 crypto_ram3 00:26:26.744 crypto_ram4 ]] 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "c86be9f0-669a-5df6-945c-fa58dff6ed7e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c86be9f0-669a-5df6-945c-fa58dff6ed7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "c80677b2-a282-5aff-a1dc-45746f010739"' ' 
],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c80677b2-a282-5aff-a1dc-45746f010739",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "fd4f5915-fba9-5158-9040-a9a2e160e61f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "fd4f5915-fba9-5158-9040-a9a2e160e61f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "a88bcc4d-9295-5420-ac7a-c245ac860a87"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "a88bcc4d-9295-5420-ac7a-c245ac860a87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 
00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram4]' 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram4 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:26.744 ************************************ 00:26:26.744 START TEST bdev_fio_trim 00:26:26.744 ************************************ 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:26.744 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:26.745 18:29:33 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:26.745 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 
00:26:26.745 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:26.745 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:26.745 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:26.745 fio-3.35 00:26:26.745 Starting 4 threads 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:01.0 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:01.1 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:01.2 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:01.3 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:01.4 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:01.5 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:01.6 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:01.7 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:02.0 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:02.1 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:02.2 
cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:02.3 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:02.4 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:02.5 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:02.6 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b3:02.7 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:01.0 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:01.1 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:01.2 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:01.3 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:01.4 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:01.5 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:01.6 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:01.7 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:02.0 cannot be used 
00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:02.1 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:02.2 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:02.3 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:02.4 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:02.5 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:02.6 cannot be used 00:26:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.745 EAL: Requested device 0000:b5:02.7 cannot be used 00:26:38.961 00:26:38.961 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2344532: Wed Jul 24 18:29:46 2024 00:26:38.961 write: IOPS=48.9k, BW=191MiB/s (200MB/s)(1910MiB/10001msec); 0 zone resets 00:26:38.961 slat (usec): min=11, max=221, avg=46.32, stdev=22.73 00:26:38.961 clat (usec): min=16, max=1031, avg=208.66, stdev=122.27 00:26:38.961 lat (usec): min=36, max=1209, avg=254.97, stdev=135.18 00:26:38.961 clat percentiles (usec): 00:26:38.961 | 50.000th=[ 180], 99.000th=[ 594], 99.900th=[ 709], 99.990th=[ 775], 00:26:38.961 | 99.999th=[ 963] 00:26:38.961 bw ( KiB/s): min=185712, max=270144, per=100.00%, avg=196162.95, stdev=5955.13, samples=76 00:26:38.961 iops : min=46428, max=67536, avg=49040.74, stdev=1488.78, samples=76 00:26:38.961 trim: IOPS=48.9k, BW=191MiB/s (200MB/s)(1910MiB/10001msec); 0 zone resets 00:26:38.961 slat (usec): min=4, max=1056, avg=13.33, stdev= 5.60 00:26:38.961 clat (usec): min=36, max=898, avg=196.71, stdev=86.28 00:26:38.961 lat (usec): min=41, max=1314, avg=210.04, 
stdev=87.73 00:26:38.961 clat percentiles (usec): 00:26:38.961 | 50.000th=[ 186], 99.000th=[ 429], 99.900th=[ 486], 99.990th=[ 594], 00:26:38.961 | 99.999th=[ 799] 00:26:38.961 bw ( KiB/s): min=185712, max=270160, per=100.00%, avg=196164.21, stdev=5955.67, samples=76 00:26:38.961 iops : min=46428, max=67540, avg=49041.05, stdev=1488.92, samples=76 00:26:38.961 lat (usec) : 20=0.01%, 50=1.52%, 100=12.71%, 250=58.90%, 500=25.06% 00:26:38.961 lat (usec) : 750=1.80%, 1000=0.01% 00:26:38.961 lat (msec) : 2=0.01% 00:26:38.961 cpu : usr=99.69%, sys=0.00%, ctx=55, majf=0, minf=94 00:26:38.961 IO depths : 1=7.5%, 2=26.4%, 4=52.8%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.961 complete : 0=0.0%, 4=88.3%, 8=11.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.961 issued rwts: total=0,489056,489057,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.961 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:38.961 00:26:38.961 Run status group 0 (all jobs): 00:26:38.961 WRITE: bw=191MiB/s (200MB/s), 191MiB/s-191MiB/s (200MB/s-200MB/s), io=1910MiB (2003MB), run=10001-10001msec 00:26:38.961 TRIM: bw=191MiB/s (200MB/s), 191MiB/s-191MiB/s (200MB/s-200MB/s), io=1910MiB (2003MB), run=10001-10001msec 00:26:38.961 00:26:38.961 real 0m13.346s 00:26:38.961 user 0m50.779s 00:26:38.961 sys 0m0.482s 00:26:38.961 18:29:46 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.961 18:29:46 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:26:38.961 ************************************ 00:26:38.961 END TEST bdev_fio_trim 00:26:38.961 ************************************ 00:26:38.961 18:29:46 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:26:38.961 18:29:46 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
00:26:38.961 18:29:46 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:26:38.961 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:38.961 18:29:46 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:26:38.961 00:26:38.961 real 0m27.008s 00:26:38.961 user 1m42.014s 00:26:38.961 sys 0m1.114s 00:26:38.961 18:29:46 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.961 18:29:46 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:38.961 ************************************ 00:26:38.961 END TEST bdev_fio 00:26:38.961 ************************************ 00:26:38.961 18:29:46 blockdev_crypto_aesni -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:38.961 18:29:46 blockdev_crypto_aesni -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:38.961 18:29:46 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:26:38.961 18:29:46 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:38.961 18:29:46 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:38.961 ************************************ 00:26:38.961 START TEST bdev_verify 00:26:38.961 ************************************ 00:26:38.961 18:29:46 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:38.961 [2024-07-24 18:29:46.728431] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:26:38.961 [2024-07-24 18:29:46.728469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2346411 ] 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:01.0 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:01.1 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:01.2 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:01.3 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:01.4 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:01.5 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:01.6 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:01.7 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:02.0 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:02.1 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:02.2 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:02.3 cannot be used 
00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:02.4 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:02.5 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:02.6 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b3:02.7 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:01.0 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:01.1 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:01.2 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:01.3 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:01.4 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:01.5 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:01.6 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:01.7 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:02.0 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:02.1 cannot be used 00:26:38.961 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.961 EAL: Requested device 0000:b5:02.2 cannot be used 00:26:38.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.962 EAL: Requested device 0000:b5:02.3 cannot be used 00:26:38.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.962 EAL: Requested device 0000:b5:02.4 cannot be used 00:26:38.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.962 EAL: Requested device 0000:b5:02.5 cannot be used 00:26:38.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.962 EAL: Requested device 0000:b5:02.6 cannot be used 00:26:38.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.962 EAL: Requested device 0000:b5:02.7 cannot be used 00:26:38.962 [2024-07-24 18:29:46.817141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:38.962 [2024-07-24 18:29:46.886873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.962 [2024-07-24 18:29:46.886877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.962 [2024-07-24 18:29:46.907874] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:26:38.962 [2024-07-24 18:29:46.915902] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:26:38.962 [2024-07-24 18:29:46.923925] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:26:38.962 [2024-07-24 18:29:47.024150] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:40.869 [2024-07-24 18:29:49.187019] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:40.869 [2024-07-24 18:29:49.187086] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:40.869 
[2024-07-24 18:29:49.187096] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:40.869 [2024-07-24 18:29:49.195038] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:40.869 [2024-07-24 18:29:49.195051] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:40.869 [2024-07-24 18:29:49.195059] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:40.869 [2024-07-24 18:29:49.203060] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:40.869 [2024-07-24 18:29:49.203073] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:40.869 [2024-07-24 18:29:49.203080] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:40.869 [2024-07-24 18:29:49.211080] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:40.869 [2024-07-24 18:29:49.211092] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:40.869 [2024-07-24 18:29:49.211099] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:40.869 Running I/O for 5 seconds... 
00:26:46.142 00:26:46.142 Latency(us) 00:26:46.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.142 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:46.142 Verification LBA range: start 0x0 length 0x1000 00:26:46.142 crypto_ram : 5.05 734.94 2.87 0.00 0.00 173903.23 2936.01 115762.79 00:26:46.142 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:46.142 Verification LBA range: start 0x1000 length 0x1000 00:26:46.142 crypto_ram : 5.04 735.96 2.87 0.00 0.00 173637.21 8178.89 114923.93 00:26:46.142 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:46.142 Verification LBA range: start 0x0 length 0x1000 00:26:46.142 crypto_ram2 : 5.05 734.84 2.87 0.00 0.00 173581.49 3053.98 107374.18 00:26:46.142 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:46.142 Verification LBA range: start 0x1000 length 0x1000 00:26:46.142 crypto_ram2 : 5.05 735.65 2.87 0.00 0.00 173308.02 8388.61 106954.75 00:26:46.142 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:46.142 Verification LBA range: start 0x0 length 0x1000 00:26:46.142 crypto_ram3 : 5.04 5751.87 22.47 0.00 0.00 22091.34 2660.76 19398.66 00:26:46.142 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:46.142 Verification LBA range: start 0x1000 length 0x1000 00:26:46.142 crypto_ram3 : 5.04 5795.28 22.64 0.00 0.00 21940.70 5295.31 19293.80 00:26:46.142 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:46.142 Verification LBA range: start 0x0 length 0x1000 00:26:46.142 crypto_ram4 : 5.04 5760.99 22.50 0.00 0.00 22028.18 2411.72 16986.93 00:26:46.142 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:46.142 Verification LBA range: start 0x1000 length 0x1000 00:26:46.142 crypto_ram4 : 5.04 5802.99 22.67 0.00 0.00 21865.14 593.10 
16777.22 00:26:46.142 =================================================================================================================== 00:26:46.142 Total : 26052.51 101.77 0.00 0.00 39125.82 593.10 115762.79 00:26:46.142 00:26:46.142 real 0m8.002s 00:26:46.142 user 0m15.329s 00:26:46.142 sys 0m0.319s 00:26:46.142 18:29:54 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:46.142 18:29:54 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:46.142 ************************************ 00:26:46.142 END TEST bdev_verify 00:26:46.142 ************************************ 00:26:46.142 18:29:54 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:46.142 18:29:54 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:26:46.142 18:29:54 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:46.142 18:29:54 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:46.400 ************************************ 00:26:46.400 START TEST bdev_verify_big_io 00:26:46.400 ************************************ 00:26:46.400 18:29:54 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:46.401 [2024-07-24 18:29:54.804858] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:26:46.401 [2024-07-24 18:29:54.804897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2347756 ] 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:01.0 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:01.1 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:01.2 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:01.3 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:01.4 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:01.5 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:01.6 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:01.7 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:02.0 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:02.1 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:02.2 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:02.3 cannot be used 
00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:02.4 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:02.5 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:02.6 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b3:02.7 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:01.0 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:01.1 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:01.2 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:01.3 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:01.4 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:01.5 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:01.6 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:01.7 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:02.0 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:02.1 cannot be used 00:26:46.401 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:02.2 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:02.3 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:02.4 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:02.5 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:02.6 cannot be used 00:26:46.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.401 EAL: Requested device 0000:b5:02.7 cannot be used 00:26:46.401 [2024-07-24 18:29:54.893194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:46.401 [2024-07-24 18:29:54.963107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.401 [2024-07-24 18:29:54.963120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.401 [2024-07-24 18:29:54.984485] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:26:46.401 [2024-07-24 18:29:54.992510] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:26:46.660 [2024-07-24 18:29:55.000530] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:26:46.660 [2024-07-24 18:29:55.095211] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:49.194 [2024-07-24 18:29:57.260655] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:49.194 [2024-07-24 18:29:57.260720] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:49.194 
[2024-07-24 18:29:57.260731] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:49.194 [2024-07-24 18:29:57.268671] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:49.194 [2024-07-24 18:29:57.268684] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:49.194 [2024-07-24 18:29:57.268692] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:49.194 [2024-07-24 18:29:57.276689] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:49.194 [2024-07-24 18:29:57.276701] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:49.194 [2024-07-24 18:29:57.276709] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:49.194 [2024-07-24 18:29:57.284712] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:49.194 [2024-07-24 18:29:57.284731] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:49.194 [2024-07-24 18:29:57.284739] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:49.194 Running I/O for 5 seconds... 
00:26:54.481
00:26:54.481 Latency(us)
00:26:54.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:54.481 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:26:54.481 Verification LBA range: start 0x0 length 0x100
00:26:54.481 crypto_ram : 5.52 68.85 4.30 0.00 0.00 1819275.22 52219.08 1637456.28
00:26:54.481 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:26:54.481 Verification LBA range: start 0x100 length 0x100
00:26:54.481 crypto_ram : 5.52 68.83 4.30 0.00 0.00 1819978.32 52009.37 1644167.17
00:26:54.481 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:26:54.481 Verification LBA range: start 0x0 length 0x100
00:26:54.481 crypto_ram2 : 5.52 68.84 4.30 0.00 0.00 1781635.76 52009.37 1644167.17
00:26:54.481 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:26:54.481 Verification LBA range: start 0x100 length 0x100
00:26:54.481 crypto_ram2 : 5.52 68.82 4.30 0.00 0.00 1782135.76 51799.65 1644167.17
00:26:54.481 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:26:54.481 Verification LBA range: start 0x0 length 0x100
00:26:54.481 crypto_ram3 : 5.36 465.29 29.08 0.00 0.00 257013.74 6868.17 362387.87
00:26:54.481 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:26:54.481 Verification LBA range: start 0x100 length 0x100
00:26:54.481 crypto_ram3 : 5.36 464.50 29.03 0.00 0.00 257394.86 5819.60 364065.59
00:26:54.481 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:26:54.481 Verification LBA range: start 0x0 length 0x100
00:26:54.481 crypto_ram4 : 5.40 478.92 29.93 0.00 0.00 245763.41 12740.20 327155.71
00:26:54.481 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:26:54.481 Verification LBA range: start 0x100 length 0x100
00:26:54.481 crypto_ram4 : 5.40 478.24 29.89 0.00 0.00 245908.87 13526.63 327155.71
00:26:54.481 ===================================================================================================================
00:26:54.481 Total : 2162.30 135.14 0.00 0.00 453302.15 5819.60 1644167.17
00:26:54.740
00:26:54.740 real 0m8.485s
00:26:54.741 user 0m16.301s
00:26:54.741 sys 0m0.314s
00:26:54.741 18:30:03 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:54.741 18:30:03 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:26:54.741 ************************************
00:26:54.741 END TEST bdev_verify_big_io
00:26:54.741 ************************************
00:26:54.741 18:30:03 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:54.741 18:30:03 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:26:54.741 18:30:03 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:54.741 18:30:03 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:26:54.741 ************************************
00:26:54.741 START TEST bdev_write_zeroes
00:26:54.741 ************************************
00:26:54.741 18:30:03 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:54.999 [2024-07-24 18:30:03.371596] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:26:54.999 [2024-07-24 18:30:03.371646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2349224 ] 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:01.0 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:01.1 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:01.2 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:01.3 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:01.4 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:01.5 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:01.6 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:01.7 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:02.0 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:02.1 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:02.2 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:02.3 cannot be used 
00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:02.4 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:02.5 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:02.6 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b3:02.7 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:01.0 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:01.1 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:01.2 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:01.3 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:01.4 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:01.5 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:01.6 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:01.7 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:02.0 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:02.1 cannot be used 00:26:54.999 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:02.2 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:02.3 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:02.4 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:02.5 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:02.6 cannot be used 00:26:54.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.999 EAL: Requested device 0000:b5:02.7 cannot be used 00:26:54.999 [2024-07-24 18:30:03.462472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.999 [2024-07-24 18:30:03.532647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.999 [2024-07-24 18:30:03.553581] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:26:54.999 [2024-07-24 18:30:03.561605] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:26:54.999 [2024-07-24 18:30:03.569629] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:26:55.258 [2024-07-24 18:30:03.675088] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:57.834 [2024-07-24 18:30:05.845155] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:57.834 [2024-07-24 18:30:05.845209] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:57.834 [2024-07-24 18:30:05.845220] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:26:57.834 [2024-07-24 18:30:05.853174] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:57.834 [2024-07-24 18:30:05.853186] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:57.834 [2024-07-24 18:30:05.853193] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:57.835 [2024-07-24 18:30:05.861194] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:57.835 [2024-07-24 18:30:05.861205] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:57.835 [2024-07-24 18:30:05.861213] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:57.835 [2024-07-24 18:30:05.869214] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:57.835 [2024-07-24 18:30:05.869225] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:57.835 [2024-07-24 18:30:05.869232] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:57.835 Running I/O for 1 seconds... 
00:26:58.404
00:26:58.404 Latency(us)
00:26:58.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:58.404 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:26:58.404 crypto_ram : 1.02 2997.65 11.71 0.00 0.00 42509.36 3512.73 50751.08
00:26:58.404 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:26:58.404 crypto_ram2 : 1.02 3003.47 11.73 0.00 0.00 42277.50 3486.52 47395.64
00:26:58.404 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:26:58.404 crypto_ram3 : 1.01 23361.73 91.26 0.00 0.00 5424.89 1605.63 6868.17
00:26:58.404 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:26:58.404 crypto_ram4 : 1.01 23346.36 91.20 0.00 0.00 5412.90 1605.63 5688.52
00:26:58.404 ===================================================================================================================
00:26:58.404 Total : 52709.21 205.90 0.00 0.00 9640.15 1605.63 50751.08
00:26:58.973
00:26:58.973 real 0m3.972s
00:26:58.973 user 0m3.617s
00:26:58.973 sys 0m0.309s
00:26:58.973 18:30:07 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:58.973 18:30:07 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:26:58.973 ************************************
00:26:58.973 END TEST bdev_write_zeroes
00:26:58.973 ************************************
00:26:58.973 18:30:07 blockdev_crypto_aesni -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:58.973 18:30:07 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:26:58.973 18:30:07 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:58.973
18:30:07 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:26:58.973 ************************************
00:26:58.973 START TEST bdev_json_nonenclosed
00:26:58.973 ************************************
00:26:58.973 18:30:07 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:58.973 [2024-07-24 18:30:07.423551] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:26:58.973 [2024-07-24 18:30:07.423592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350031 ]
00:26:59.232 [2024-07-24 18:30:07.516683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:59.232 [2024-07-24 18:30:07.587213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:59.232 [2024-07-24 18:30:07.587267] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:26:59.232 [2024-07-24 18:30:07.587295] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
[2024-07-24 18:30:07.587303] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:59.232
00:26:59.232 real 0m0.287s
00:26:59.232 user 0m0.165s
00:26:59.232 sys 0m0.121s
00:26:59.232 18:30:07 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:59.232 18:30:07 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:26:59.232 ************************************
00:26:59.232 END TEST bdev_json_nonenclosed
00:26:59.232 ************************************
00:26:59.232 18:30:07 blockdev_crypto_aesni -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:59.232 18:30:07 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:26:59.232 18:30:07 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:59.232 18:30:07 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:26:59.232 ************************************
00:26:59.232 START TEST bdev_json_nonarray
00:26:59.232 ************************************
00:26:59.232 18:30:07 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:26:59.232 [2024-07-24 18:30:07.773831] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:26:59.232 [2024-07-24 18:30:07.773873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350201 ]
00:26:59.492 [2024-07-24 18:30:07.858461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:59.492 [2024-07-24 18:30:07.928490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:59.492 [2024-07-24 18:30:07.928564] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:26:59.492 [2024-07-24 18:30:07.928576] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
[2024-07-24 18:30:07.928584] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:59.492
00:26:59.492 real 0m0.263s
00:26:59.492 user 0m0.161s
00:26:59.492 sys 0m0.101s
00:26:59.492 18:30:07 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:59.492 18:30:07 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:26:59.492 ************************************
00:26:59.492 END TEST bdev_json_nonarray
00:26:59.492 ************************************
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@786 -- # [[ crypto_aesni == bdev ]]
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@793 -- # [[ crypto_aesni == gpt ]]
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@797 -- # [[ crypto_aesni == crypto_sw ]]
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # cleanup
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]]
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]]
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]]
00:26:59.492 18:30:08 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]]
00:26:59.492
00:26:59.492 real 1m7.501s
00:26:59.492 user 2m44.445s
00:26:59.492 sys 0m7.443s
18:30:08 blockdev_crypto_aesni -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:59.492 18:30:08 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:26:59.492 ************************************
00:26:59.492 END TEST blockdev_crypto_aesni
00:26:59.492 ************************************
00:26:59.751 18:30:08 -- spdk/autotest.sh@362 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw
00:26:59.751 18:30:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:59.751 18:30:08 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:59.751 18:30:08 -- common/autotest_common.sh@10 -- # set +x
00:26:59.751 ************************************
00:26:59.751 START TEST blockdev_crypto_sw
00:26:59.751 ************************************
00:26:59.751 18:30:08 blockdev_crypto_sw -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw
00:26:59.751 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # :
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # uname -s
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@681 -- # test_type=crypto_sw
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # crypto_device=
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # dek=
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # env_ctx=
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == bdev ]]
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == crypto_* ]]
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2350340
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:26:59.751 18:30:08 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 2350340
00:26:59.751 18:30:08 blockdev_crypto_sw -- common/autotest_common.sh@831 -- # '[' -z 2350340 ']'
00:26:59.751 18:30:08 blockdev_crypto_sw -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:59.751 18:30:08 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:59.751 18:30:08 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:59.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:59.751 18:30:08 blockdev_crypto_sw -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:59.751 18:30:08 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:26:59.751 [2024-07-24 18:30:08.312857] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:26:59.751 [2024-07-24 18:30:08.312911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350340 ]
00:27:00.011 [2024-07-24 18:30:08.407481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:00.011 [2024-07-24 18:30:08.483872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:00.579 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:00.579 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@864 -- # return 0
00:27:00.579 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:27:00.579 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@710 -- # setup_crypto_sw_conf
00:27:00.579 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@192 -- # rpc_cmd
00:27:00.579 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:00.579 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:27:00.840 Malloc0
00:27:00.840 Malloc1
00:27:00.840 true
00:27:00.840 true
00:27:00.840 true
00:27:00.840 [2024-07-24 18:30:09.333690] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:27:00.840 crypto_ram
00:27:00.840 [2024-07-24 18:30:09.341716] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create:
*NOTICE*: Found key "test_dek_sw2" 00:27:00.840 crypto_ram2 00:27:00.840 [2024-07-24 18:30:09.349736] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:00.840 crypto_ram3 00:27:00.840 [ 00:27:00.840 { 00:27:00.840 "name": "Malloc1", 00:27:00.840 "aliases": [ 00:27:00.840 "19330665-bcb2-478a-9aaa-a29327320ba2" 00:27:00.840 ], 00:27:00.840 "product_name": "Malloc disk", 00:27:00.840 "block_size": 4096, 00:27:00.840 "num_blocks": 4096, 00:27:00.840 "uuid": "19330665-bcb2-478a-9aaa-a29327320ba2", 00:27:00.840 "assigned_rate_limits": { 00:27:00.840 "rw_ios_per_sec": 0, 00:27:00.840 "rw_mbytes_per_sec": 0, 00:27:00.840 "r_mbytes_per_sec": 0, 00:27:00.840 "w_mbytes_per_sec": 0 00:27:00.840 }, 00:27:00.840 "claimed": true, 00:27:00.840 "claim_type": "exclusive_write", 00:27:00.840 "zoned": false, 00:27:00.840 "supported_io_types": { 00:27:00.840 "read": true, 00:27:00.840 "write": true, 00:27:00.840 "unmap": true, 00:27:00.840 "flush": true, 00:27:00.840 "reset": true, 00:27:00.840 "nvme_admin": false, 00:27:00.840 "nvme_io": false, 00:27:00.840 "nvme_io_md": false, 00:27:00.840 "write_zeroes": true, 00:27:00.840 "zcopy": true, 00:27:00.840 "get_zone_info": false, 00:27:00.840 "zone_management": false, 00:27:00.840 "zone_append": false, 00:27:00.840 "compare": false, 00:27:00.840 "compare_and_write": false, 00:27:00.840 "abort": true, 00:27:00.840 "seek_hole": false, 00:27:00.840 "seek_data": false, 00:27:00.840 "copy": true, 00:27:00.840 "nvme_iov_md": false 00:27:00.840 }, 00:27:00.840 "memory_domains": [ 00:27:00.840 { 00:27:00.840 "dma_device_id": "system", 00:27:00.840 "dma_device_type": 1 00:27:00.840 }, 00:27:00.840 { 00:27:00.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:00.840 "dma_device_type": 2 00:27:00.840 } 00:27:00.840 ], 00:27:00.840 "driver_specific": {} 00:27:00.840 } 00:27:00.840 ] 00:27:00.840 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.840 18:30:09 
blockdev_crypto_sw -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:27:00.840 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.840 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:00.840 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.840 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # cat 00:27:00.840 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:27:00.840 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.840 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:00.840 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.840 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:27:00.840 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.840 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.099 18:30:09 blockdev_crypto_sw -- 
common/autotest_common.sh@10 -- # set +x 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r .name 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "bc36e49f-6bbc-590e-827b-6ab480a681b9"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "bc36e49f-6bbc-590e-827b-6ab480a681b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "e7c6ee0d-449b-538c-b6fe-d8516d06cbfd"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "e7c6ee0d-449b-538c-b6fe-d8516d06cbfd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:27:01.099 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # killprocess 2350340 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@950 -- # '[' -z 2350340 ']' 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # kill -0 2350340 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # uname 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2350340 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2350340' 00:27:01.099 killing process with pid 2350340 00:27:01.099 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@969 -- # kill 2350340 00:27:01.099 18:30:09 
blockdev_crypto_sw -- common/autotest_common.sh@974 -- # wait 2350340 00:27:01.359 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:01.359 18:30:09 blockdev_crypto_sw -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:27:01.359 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:01.359 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:01.359 18:30:09 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:01.359 ************************************ 00:27:01.359 START TEST bdev_hello_world 00:27:01.359 ************************************ 00:27:01.359 18:30:09 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:27:01.619 [2024-07-24 18:30:10.003051] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:27:01.619 [2024-07-24 18:30:10.003094] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350840 ] 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:01.0 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:01.1 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:01.2 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:01.3 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:01.4 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:01.5 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:01.6 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:01.7 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:02.0 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:02.1 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:02.2 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:02.3 cannot be used 
00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:02.4 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:02.5 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:02.6 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b3:02.7 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:01.0 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:01.1 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:01.2 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:01.3 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:01.4 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:01.5 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:01.6 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:01.7 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:02.0 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:02.1 cannot be used 00:27:01.619 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:02.2 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:02.3 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:02.4 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:02.5 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:02.6 cannot be used 00:27:01.619 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:01.619 EAL: Requested device 0000:b5:02.7 cannot be used 00:27:01.619 [2024-07-24 18:30:10.098525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.619 [2024-07-24 18:30:10.167785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.877 [2024-07-24 18:30:10.327620] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:01.877 [2024-07-24 18:30:10.327680] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:01.877 [2024-07-24 18:30:10.327706] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:01.877 [2024-07-24 18:30:10.335645] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:01.877 [2024-07-24 18:30:10.335658] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:01.877 [2024-07-24 18:30:10.335666] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:01.877 [2024-07-24 18:30:10.343662] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:01.877 [2024-07-24 18:30:10.343674] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:01.877 [2024-07-24 18:30:10.343681] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:01.877 [2024-07-24 18:30:10.381988] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:01.877 [2024-07-24 18:30:10.382012] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:27:01.877 [2024-07-24 18:30:10.382023] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:01.877 [2024-07-24 18:30:10.383221] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:01.877 [2024-07-24 18:30:10.383274] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:01.877 [2024-07-24 18:30:10.383285] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:01.877 [2024-07-24 18:30:10.383308] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:27:01.877 00:27:01.877 [2024-07-24 18:30:10.383319] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:02.137 00:27:02.137 real 0m0.602s 00:27:02.137 user 0m0.397s 00:27:02.137 sys 0m0.186s 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:27:02.137 ************************************ 00:27:02.137 END TEST bdev_hello_world 00:27:02.137 ************************************ 00:27:02.137 18:30:10 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:27:02.137 18:30:10 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:02.137 18:30:10 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:02.137 18:30:10 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:02.137 ************************************ 00:27:02.137 START TEST bdev_bounds 00:27:02.137 ************************************ 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2351105 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2351105' 00:27:02.137 Process bdevio pid: 2351105 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 2351105 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@831 -- # '[' -z 2351105 ']' 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.137 18:30:10 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:02.137 [2024-07-24 18:30:10.678648] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:27:02.137 [2024-07-24 18:30:10.678688] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351105 ] 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:01.0 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:01.1 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:01.2 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:01.3 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:01.4 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 
EAL: Requested device 0000:b3:01.5 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:01.6 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:01.7 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:02.0 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:02.1 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:02.2 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:02.3 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:02.4 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:02.5 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:02.6 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b3:02.7 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:01.0 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:01.1 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:01.2 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 
0000:b5:01.3 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:01.4 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:01.5 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:01.6 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:01.7 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:02.0 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:02.1 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:02.2 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:02.3 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:02.4 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:02.5 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:02.6 cannot be used 00:27:02.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:02.137 EAL: Requested device 0000:b5:02.7 cannot be used 00:27:02.396 [2024-07-24 18:30:10.772183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:02.396 [2024-07-24 18:30:10.848115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.396 [2024-07-24 18:30:10.848213] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 2 00:27:02.396 [2024-07-24 18:30:10.848215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.655 [2024-07-24 18:30:11.002883] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:02.655 [2024-07-24 18:30:11.002932] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:02.655 [2024-07-24 18:30:11.002942] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:02.655 [2024-07-24 18:30:11.010904] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:02.655 [2024-07-24 18:30:11.010917] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:02.655 [2024-07-24 18:30:11.010924] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:02.655 [2024-07-24 18:30:11.018929] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:02.655 [2024-07-24 18:30:11.018941] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:02.655 [2024-07-24 18:30:11.018949] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:02.913 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:02.913 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:27:02.913 18:30:11 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:03.172 I/O targets: 00:27:03.172 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:27:03.172 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:27:03.172 00:27:03.172 00:27:03.172 CUnit - A unit testing framework for C - Version 2.1-3 00:27:03.172 http://cunit.sourceforge.net/ 
00:27:03.172 00:27:03.172 00:27:03.172 Suite: bdevio tests on: crypto_ram3 00:27:03.172 Test: blockdev write read block ...passed 00:27:03.172 Test: blockdev write zeroes read block ...passed 00:27:03.172 Test: blockdev write zeroes read no split ...passed 00:27:03.172 Test: blockdev write zeroes read split ...passed 00:27:03.172 Test: blockdev write zeroes read split partial ...passed 00:27:03.172 Test: blockdev reset ...passed 00:27:03.172 Test: blockdev write read 8 blocks ...passed 00:27:03.172 Test: blockdev write read size > 128k ...passed 00:27:03.172 Test: blockdev write read invalid size ...passed 00:27:03.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:03.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:03.172 Test: blockdev write read max offset ...passed 00:27:03.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:03.172 Test: blockdev writev readv 8 blocks ...passed 00:27:03.172 Test: blockdev writev readv 30 x 1block ...passed 00:27:03.172 Test: blockdev writev readv block ...passed 00:27:03.172 Test: blockdev writev readv size > 128k ...passed 00:27:03.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:03.172 Test: blockdev comparev and writev ...passed 00:27:03.172 Test: blockdev nvme passthru rw ...passed 00:27:03.172 Test: blockdev nvme passthru vendor specific ...passed 00:27:03.172 Test: blockdev nvme admin passthru ...passed 00:27:03.172 Test: blockdev copy ...passed 00:27:03.172 Suite: bdevio tests on: crypto_ram 00:27:03.172 Test: blockdev write read block ...passed 00:27:03.172 Test: blockdev write zeroes read block ...passed 00:27:03.172 Test: blockdev write zeroes read no split ...passed 00:27:03.172 Test: blockdev write zeroes read split ...passed 00:27:03.172 Test: blockdev write zeroes read split partial ...passed 00:27:03.172 Test: blockdev reset ...passed 00:27:03.172 Test: blockdev write read 8 blocks ...passed 
00:27:03.172 Test: blockdev write read size > 128k ...passed 00:27:03.172 Test: blockdev write read invalid size ...passed 00:27:03.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:03.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:03.172 Test: blockdev write read max offset ...passed 00:27:03.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:03.172 Test: blockdev writev readv 8 blocks ...passed 00:27:03.172 Test: blockdev writev readv 30 x 1block ...passed 00:27:03.172 Test: blockdev writev readv block ...passed 00:27:03.172 Test: blockdev writev readv size > 128k ...passed 00:27:03.173 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:03.173 Test: blockdev comparev and writev ...passed 00:27:03.173 Test: blockdev nvme passthru rw ...passed 00:27:03.173 Test: blockdev nvme passthru vendor specific ...passed 00:27:03.173 Test: blockdev nvme admin passthru ...passed 00:27:03.173 Test: blockdev copy ...passed 00:27:03.173 00:27:03.173 Run Summary: Type Total Ran Passed Failed Inactive 00:27:03.173 suites 2 2 n/a 0 0 00:27:03.173 tests 46 46 46 0 0 00:27:03.173 asserts 260 260 260 0 n/a 00:27:03.173 00:27:03.173 Elapsed time = 0.075 seconds 00:27:03.173 0 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2351105 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 2351105 ']' 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 2351105 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2351105 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2351105' 00:27:03.173 killing process with pid 2351105 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@969 -- # kill 2351105 00:27:03.173 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@974 -- # wait 2351105 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:27:03.432 00:27:03.432 real 0m1.220s 00:27:03.432 user 0m3.221s 00:27:03.432 sys 0m0.308s 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:03.432 ************************************ 00:27:03.432 END TEST bdev_bounds 00:27:03.432 ************************************ 00:27:03.432 18:30:11 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:27:03.432 18:30:11 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:03.432 18:30:11 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:03.432 18:30:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:03.432 ************************************ 00:27:03.432 START TEST bdev_nbd 00:27:03.432 ************************************ 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # uname 
-s 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2351286 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@316 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2351286 /var/tmp/spdk-nbd.sock 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 2351286 ']' 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:03.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:03.432 18:30:11 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:03.432 [2024-07-24 18:30:12.001251] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:27:03.432 [2024-07-24 18:30:12.001297] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:01.0 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:01.1 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:01.2 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:01.3 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:01.4 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:01.5 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:01.6 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:01.7 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:02.0 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:02.1 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:02.2 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:02.3 cannot be used 00:27:03.691 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:02.4 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:02.5 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:02.6 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b3:02.7 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:01.0 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:01.1 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:01.2 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:01.3 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:01.4 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:01.5 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:01.6 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:01.7 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:02.0 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:02.1 cannot be used 00:27:03.691 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:02.2 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:02.3 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:02.4 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:02.5 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:02.6 cannot be used 00:27:03.691 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:03.691 EAL: Requested device 0000:b5:02.7 cannot be used 00:27:03.691 [2024-07-24 18:30:12.094079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.691 [2024-07-24 18:30:12.167120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.950 [2024-07-24 18:30:12.322835] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:03.950 [2024-07-24 18:30:12.322895] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:03.950 [2024-07-24 18:30:12.322906] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:03.950 [2024-07-24 18:30:12.330854] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:03.950 [2024-07-24 18:30:12.330866] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:03.950 [2024-07-24 18:30:12.330874] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:03.950 [2024-07-24 18:30:12.338873] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:03.950 [2024-07-24 18:30:12.338884] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:03.950 [2024-07-24 18:30:12.338891] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:04.209 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:27:04.467 
18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:04.467 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:04.467 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:04.468 1+0 records in 00:27:04.468 1+0 records out 00:27:04.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261408 s, 15.7 MB/s 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:04.468 18:30:12 
blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:04.468 18:30:12 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:04.726 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:04.726 1+0 records in 00:27:04.726 1+0 records out 00:27:04.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299726 s, 13.7 MB/s 00:27:04.727 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat 
-c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:04.727 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:04.727 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:04.727 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:04.727 18:30:13 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:04.727 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:04.727 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:04.727 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:04.985 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:04.985 { 00:27:04.985 "nbd_device": "/dev/nbd0", 00:27:04.985 "bdev_name": "crypto_ram" 00:27:04.985 }, 00:27:04.985 { 00:27:04.985 "nbd_device": "/dev/nbd1", 00:27:04.985 "bdev_name": "crypto_ram3" 00:27:04.985 } 00:27:04.985 ]' 00:27:04.985 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:04.985 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:04.985 { 00:27:04.985 "nbd_device": "/dev/nbd0", 00:27:04.985 "bdev_name": "crypto_ram" 00:27:04.985 }, 00:27:04.985 { 00:27:04.985 "nbd_device": "/dev/nbd1", 00:27:04.985 "bdev_name": "crypto_ram3" 00:27:04.985 } 00:27:04.985 ]' 00:27:04.985 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:04.985 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:04.985 18:30:13 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:04.985 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:04.985 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:04.985 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:04.985 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:04.985 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:05.243 18:30:13 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:05.243 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:05.501 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:05.501 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:05.501 18:30:13 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:05.501 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 
00:27:05.760 /dev/nbd0 00:27:05.760 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:05.761 1+0 records in 00:27:05.761 1+0 records out 00:27:05.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268587 s, 15.3 MB/s 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # 
return 0 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:05.761 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:27:06.019 /dev/nbd1 00:27:06.019 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:06.019 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:06.019 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:06.019 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:27:06.019 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:06.019 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:06.019 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:06.019 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:27:06.019 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:06.020 1+0 records in 00:27:06.020 1+0 records out 00:27:06.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283198 s, 14.5 MB/s 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:06.020 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:06.279 { 00:27:06.279 "nbd_device": "/dev/nbd0", 00:27:06.279 "bdev_name": "crypto_ram" 00:27:06.279 }, 00:27:06.279 { 00:27:06.279 "nbd_device": "/dev/nbd1", 00:27:06.279 "bdev_name": "crypto_ram3" 00:27:06.279 } 00:27:06.279 ]' 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:06.279 { 00:27:06.279 "nbd_device": "/dev/nbd0", 00:27:06.279 "bdev_name": "crypto_ram" 00:27:06.279 }, 00:27:06.279 { 00:27:06.279 "nbd_device": "/dev/nbd1", 00:27:06.279 "bdev_name": "crypto_ram3" 00:27:06.279 } 00:27:06.279 ]' 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:06.279 /dev/nbd1' 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:27:06.279 /dev/nbd1' 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:06.279 256+0 records in 00:27:06.279 256+0 records out 00:27:06.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104359 s, 100 MB/s 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:06.279 256+0 records in 00:27:06.279 256+0 records out 00:27:06.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195685 s, 53.6 MB/s 00:27:06.279 18:30:14 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:06.279 256+0 records in 00:27:06.279 256+0 records out 00:27:06.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307998 s, 34.0 MB/s 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- 
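The `nbd_dd_data_verify` calls traced above follow a simple round trip: write one shared 1 MiB random pattern to every attached device, then byte-compare each device against the pattern with `cmp`. A minimal standalone approximation (argument order simplified; plain temp files stand in for `/dev/nbd0` and `/dev/nbd1`, which require a running SPDK target, and `oflag=direct` is dropped since regular files may not support it):

```shell
#!/usr/bin/env bash
# Sketch of the write/verify round trip from nbd_common.sh's
# nbd_dd_data_verify; regular files stand in for the nbd devices.
nbd_dd_data_verify() {
    local operation=$1; shift
    local tmp_file=/tmp/nbdrandtest
    local nbd_list=("$@")
    local i
    if [ "$operation" = write ]; then
        # one shared 1 MiB random pattern, copied to every device
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        # byte-compare the first 1M of every device with the pattern;
        # any mismatch propagates cmp's non-zero exit and fails the test
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i" || return 1
        done
        rm "$tmp_file"
    fi
}
```

In the trace this corresponds to the `@100` (write) and `@101` (verify) steps, with the mismatch case surfacing through `cmp`'s exit status.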
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:06.279 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:06.538 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:06.538 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:06.538 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:06.538 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:06.538 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:06.538 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:06.538 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:06.538 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:06.538 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:06.538 18:30:14 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:06.796 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:06.796 18:30:15 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:06.796 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:06.796 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:06.796 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 
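`nbd_get_count`, seen twice above (count=2 before stopping the disks, count=0 after), derives the device count by piping the `nbd_get_disks` JSON through `jq` and `grep -c`. A sketch of just the parsing step, with the JSON canned from this log rather than fetched over the RPC socket (`rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks`); `jq` must be installed:

```shell
#!/usr/bin/env bash
# Sketch of nbd_get_count's parsing step; the JSON below is a canned copy
# of the nbd_get_disks output from this trace.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "crypto_ram" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "crypto_ram3" }
]'
# extract one device path per line
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits 1 when nothing matches (the empty-list case after the
# disks are stopped), which is why the traced pipeline falls back to
# `true` to keep a count of 0 without aborting.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```

With the empty `[]` JSON from later in the trace, the same pipeline yields `count=0` via the `true` fallback (visible as the `@65 -- # true` step).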
00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:06.797 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:07.056 malloc_lvol_verify 00:27:07.056 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:07.315 a48b1708-b498-42d4-9c80-4e1edd61035f 00:27:07.315 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:07.315 d47c1ab1-985a-4436-9132-a71086e724a7 00:27:07.315 18:30:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:07.574 /dev/nbd0 00:27:07.574 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:07.574 mke2fs 1.46.5 (30-Dec-2021) 00:27:07.574 Discarding device blocks: 0/4096 done 00:27:07.574 Creating filesystem with 4096 1k 
blocks and 1024 inodes 00:27:07.574 00:27:07.574 Allocating group tables: 0/1 done 00:27:07.574 Writing inode tables: 0/1 done 00:27:07.574 Creating journal (1024 blocks): done 00:27:07.574 Writing superblocks and filesystem accounting information: 0/1 done 00:27:07.574 00:27:07.574 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:07.574 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:07.574 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:07.574 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:07.574 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:07.574 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:07.574 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:07.574 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:07.833 18:30:16 
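The `nbd_stop_disk` path above relies on `waitfornbd_exit`: after the RPC detaches a device, poll `/proc/partitions` until the name disappears. Its counterpart `waitfornbd` (used when attaching, earlier in the trace) polls for the name to appear and then proves the device serves data with one direct-I/O read. A standalone approximation of both helpers (the originals live in `common/autotest_common.sh` and `bdev/nbd_common.sh`; retry counts and sleeps here are guesses at the effective timing):

```shell
#!/usr/bin/env bash
# Approximation of the two nbd readiness helpers traced in this log.

# Wait (up to ~2 s) for an nbd device to vanish from /proc/partitions
# after nbd_stop_disk; mirrors nbd_common.sh's waitfornbd_exit.
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}

# Wait for an nbd device to appear, then read one 4 KiB block with
# O_DIRECT as a liveness check; mirrors autotest_common.sh's waitfornbd.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # A device can be listed in /proc/partitions before it is usable,
    # hence the second loop with a real read.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" -ne 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}
```

The `1+0 records in / 4096 bytes copied` lines earlier in the trace are exactly this `dd` liveness read succeeding on `/dev/nbd1`.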
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2351286 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 2351286 ']' 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 2351286 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2351286 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2351286' 00:27:07.833 killing process with pid 2351286 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@969 -- # kill 2351286 00:27:07.833 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@974 -- # wait 2351286 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:27:08.093 00:27:08.093 real 0m4.532s 00:27:08.093 user 0m6.263s 00:27:08.093 sys 0m1.869s 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:08.093 ************************************ 00:27:08.093 END TEST bdev_nbd 00:27:08.093 ************************************ 00:27:08.093 18:30:16 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # 
[[ y == y ]] 00:27:08.093 18:30:16 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = nvme ']' 00:27:08.093 18:30:16 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = gpt ']' 00:27:08.093 18:30:16 blockdev_crypto_sw -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:27:08.093 18:30:16 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:08.093 18:30:16 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:08.093 18:30:16 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:08.093 ************************************ 00:27:08.093 START TEST bdev_fio 00:27:08.093 ************************************ 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:27:08.093 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio 
-- common/autotest_common.sh@1281 -- # local workload=verify 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=crypto_ram 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:08.093 ************************************ 00:27:08.093 START TEST bdev_fio_rw_verify 00:27:08.093 ************************************ 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:08.093 18:30:16 
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:27:08.093 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:08.380 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 
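The `fio_plugin` sanitizer dance traced above runs `ldd` on the `spdk_bdev` fio plugin, greps for an ASan runtime (`libasan` for gcc builds, `libclang_rt.asan` for clang), and prepends whatever it finds to `LD_PRELOAD` so the sanitizer initialises before the plugin. A sketch of that construction (the plugin path is a placeholder, not this CI's path; in this run no sanitizer was linked, so `asan_lib` stayed empty and `LD_PRELOAD` was just the plugin):

```shell
#!/usr/bin/env bash
# Sketch of the LD_PRELOAD construction from autotest_common.sh's
# fio_plugin; the plugin path below is a placeholder.
plugin=/path/to/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # third ldd column is the resolved library path; empty when the
    # plugin was not built with that sanitizer (or does not exist)
    asan_lib=$(ldd "$plugin" 2>/dev/null | grep "$sanitizer" | awk '{print $3}')
    if [ -n "$asan_lib" ]; then
        break
    fi
done
# sanitizer runtime (possibly empty) first, then the plugin itself;
# fio then loads the plugin as an external ioengine via this preload
LD_PRELOAD="$asan_lib $plugin"
echo "LD_PRELOAD=$LD_PRELOAD"
```

The actual run then invokes `/usr/src/fio/fio --ioengine=spdk_bdev ...` under this `LD_PRELOAD`, which is why the trace shows `LD_PRELOAD=' /var/jenkins/.../build/fio/spdk_bdev'` with a leading space.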
00:27:08.380 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:08.380 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:08.380 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:08.380 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:08.380 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:08.380 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:08.380 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:08.380 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:08.380 18:30:16 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:08.650 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:08.650 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:08.650 fio-3.35 00:27:08.650 Starting 2 threads 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 
0000:b3:01.0 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:01.1 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:01.2 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:01.3 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:01.4 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:01.5 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:01.6 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:01.7 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:02.0 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:02.1 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:02.2 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:02.3 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:02.4 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:02.5 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:02.6 cannot be 
used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b3:02.7 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:01.0 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:01.1 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:01.2 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:01.3 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:01.4 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:01.5 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:01.6 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:01.7 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:02.0 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:02.1 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:02.2 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:02.3 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:02.4 cannot be used 00:27:08.650 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:02.5 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:02.6 cannot be used 00:27:08.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:08.650 EAL: Requested device 0000:b5:02.7 cannot be used 00:27:20.843 00:27:20.843 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2352483: Wed Jul 24 18:30:27 2024 00:27:20.843 read: IOPS=32.3k, BW=126MiB/s (132MB/s)(1261MiB/10000msec) 00:27:20.843 slat (usec): min=8, max=1327, avg=13.89, stdev= 3.81 00:27:20.843 clat (usec): min=5, max=1508, avg=99.67, stdev=40.42 00:27:20.843 lat (usec): min=17, max=1525, avg=113.57, stdev=41.61 00:27:20.843 clat percentiles (usec): 00:27:20.843 | 50.000th=[ 97], 99.000th=[ 194], 99.900th=[ 210], 99.990th=[ 233], 00:27:20.843 | 99.999th=[ 359] 00:27:20.843 write: IOPS=38.8k, BW=152MiB/s (159MB/s)(1436MiB/9475msec); 0 zone resets 00:27:20.843 slat (usec): min=9, max=254, avg=22.85, stdev= 3.49 00:27:20.843 clat (usec): min=16, max=884, avg=132.78, stdev=61.14 00:27:20.843 lat (usec): min=35, max=990, avg=155.63, stdev=62.41 00:27:20.843 clat percentiles (usec): 00:27:20.843 | 50.000th=[ 129], 99.000th=[ 265], 99.900th=[ 289], 99.990th=[ 685], 00:27:20.843 | 99.999th=[ 865] 00:27:20.843 bw ( KiB/s): min=142739, max=152896, per=94.79%, avg=147125.63, stdev=1216.73, samples=38 00:27:20.843 iops : min=35684, max=38224, avg=36781.37, stdev=304.22, samples=38 00:27:20.843 lat (usec) : 10=0.01%, 20=0.01%, 50=9.50%, 100=33.50%, 250=55.50% 00:27:20.843 lat (usec) : 500=1.48%, 750=0.01%, 1000=0.01% 00:27:20.843 lat (msec) : 2=0.01% 00:27:20.843 cpu : usr=99.72%, sys=0.00%, ctx=24, majf=0, minf=492 00:27:20.843 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:20.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:20.843 complete : 
0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:20.843 issued rwts: total=322722,367641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:20.843 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:20.843 00:27:20.843 Run status group 0 (all jobs): 00:27:20.843 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=1261MiB (1322MB), run=10000-10000msec 00:27:20.843 WRITE: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=1436MiB (1506MB), run=9475-9475msec 00:27:20.843 00:27:20.843 real 0m10.994s 00:27:20.843 user 0m28.647s 00:27:20.843 sys 0m0.306s 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:27:20.843 ************************************ 00:27:20.843 END TEST bdev_fio_rw_verify 00:27:20.843 ************************************ 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:27:20.843 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "bc36e49f-6bbc-590e-827b-6ab480a681b9"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "bc36e49f-6bbc-590e-827b-6ab480a681b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "e7c6ee0d-449b-538c-b6fe-d8516d06cbfd"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "e7c6ee0d-449b-538c-b6fe-d8516d06cbfd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:27:20.844 crypto_ram3 ]] 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "bc36e49f-6bbc-590e-827b-6ab480a681b9"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"bc36e49f-6bbc-590e-827b-6ab480a681b9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "e7c6ee0d-449b-538c-b6fe-d8516d06cbfd"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "e7c6ee0d-449b-538c-b6fe-d8516d06cbfd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' 
"name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:20.844 ************************************ 00:27:20.844 START TEST bdev_fio_trim 00:27:20.844 ************************************ 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:20.844 18:30:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:20.844 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:20.844 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:20.844 fio-3.35 00:27:20.844 
Starting 2 threads 00:27:20.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:20.844 EAL: Requested device 0000:b3:01.0 cannot be used [same message pair repeated for each remaining QAT device, 0000:b3:01.1 through 0000:b5:02.7] 00:27:30.890 00:27:30.890 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=2354497: Wed Jul 24 18:30:38 2024 00:27:30.890 write: IOPS=54.3k, BW=212MiB/s (223MB/s)(2122MiB/10001msec); 0 zone resets 00:27:30.890 slat (usec): min=9, max=302, avg=15.81, stdev= 3.38 00:27:30.890 clat (usec): min=25, max=1443, avg=122.03, stdev=67.36 00:27:30.890 lat (usec): min=34, max=1457, avg=137.84, stdev=69.81 00:27:30.890 clat percentiles (usec): 00:27:30.890 | 50.000th=[ 98], 99.000th=[ 251], 99.900th=[ 273], 99.990th=[ 498], 00:27:30.890 | 99.999th=[ 775] 00:27:30.890 bw ( KiB/s): min=212784, max=219040, per=100.00%, avg=217531.79, stdev=715.62, samples=38 00:27:30.890 iops : min=53196, max=54760, avg=54382.95, stdev=178.91, samples=38 00:27:30.890 trim: IOPS=54.3k, BW=212MiB/s (223MB/s)(2122MiB/10001msec); 0 zone resets 00:27:30.890 slat (nsec): min=3702, max=69165, avg=7286.61, stdev=1992.24 00:27:30.890 clat (usec): min=28, max=1457, avg=81.56, stdev=24.72 00:27:30.890 lat (usec): min=33, max=1463, avg=88.85, stdev=24.85 00:27:30.890 clat percentiles (usec): 00:27:30.890 | 50.000th=[ 82], 99.000th=[ 133], 99.900th=[ 149], 99.990th=[ 297], 00:27:30.890 | 99.999th=[ 469] 00:27:30.890 bw ( KiB/s): min=212808, max=219040, per=100.00%, avg=217533.05, stdev=714.00, samples=38 00:27:30.890 iops : min=53202, max=54760, avg=54383.26, stdev=178.49, samples=38 00:27:30.890 lat (usec) : 50=12.95%, 100=49.14%, 250=37.38%, 500=0.53%, 750=0.01%
00:27:30.890 lat (usec) : 1000=0.01% 00:27:30.890 lat (msec) : 2=0.01% 00:27:30.890 cpu : usr=99.73%, sys=0.00%, ctx=21, majf=0, minf=351 00:27:30.890 IO depths : 1=7.5%, 2=17.5%, 4=60.0%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:30.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.890 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.890 issued rwts: total=0,543286,543287,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.890 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:30.890 00:27:30.890 Run status group 0 (all jobs): 00:27:30.890 WRITE: bw=212MiB/s (223MB/s), 212MiB/s-212MiB/s (223MB/s-223MB/s), io=2122MiB (2225MB), run=10001-10001msec 00:27:30.890 TRIM: bw=212MiB/s (223MB/s), 212MiB/s-212MiB/s (223MB/s-223MB/s), io=2122MiB (2225MB), run=10001-10001msec 00:27:30.890 00:27:30.890 real 0m11.046s 00:27:30.890 user 0m29.508s 00:27:30.890 sys 0m0.365s 00:27:30.890 18:30:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:30.890 18:30:38 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:27:30.890 ************************************ 00:27:30.890 END TEST bdev_fio_trim 00:27:30.890 ************************************ 00:27:30.890 18:30:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:27:30.890 18:30:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:30.890 18:30:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:27:30.890 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:30.890 18:30:38 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:30.890 00:27:30.890 real 0m22.388s 00:27:30.890 user 0m58.331s 00:27:30.890 sys 0m0.867s 00:27:30.890 18:30:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:27:30.890 18:30:38 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:30.890 ************************************ 00:27:30.890 END TEST bdev_fio 00:27:30.890 ************************************ 00:27:30.890 18:30:38 blockdev_crypto_sw -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:30.890 18:30:38 blockdev_crypto_sw -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:30.890 18:30:38 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:27:30.890 18:30:38 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:30.890 18:30:38 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:30.890 ************************************ 00:27:30.890 START TEST bdev_verify 00:27:30.890 ************************************ 00:27:30.890 18:30:39 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:30.890 [2024-07-24 18:30:39.082428] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:27:30.890 [2024-07-24 18:30:39.082478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356133 ] 00:27:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.890 EAL: Requested device 0000:b3:01.0 cannot be used [same message pair repeated for each remaining QAT device, 0000:b3:01.1 through 0000:b5:02.7] 00:27:30.891 [2024-07-24 18:30:39.176957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:30.891 [2024-07-24 18:30:39.247633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.891 [2024-07-24 18:30:39.247633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.891 [2024-07-24 18:30:39.405401] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" [2024-07-24 18:30:39.405454] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 [2024-07-24 18:30:39.405466] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival [2024-07-24 18:30:39.413417] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" [2024-07-24 18:30:39.413436] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 [2024-07-24 18:30:39.413444] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival [2024-07-24 18:30:39.421437] vbdev_crypto_rpc.c:
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:30.891 [2024-07-24 18:30:39.421450] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:30.891 [2024-07-24 18:30:39.421458] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:30.891 Running I/O for 5 seconds... 00:27:36.157 00:27:36.157 Latency(us) 00:27:36.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.157 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:36.157 Verification LBA range: start 0x0 length 0x800 00:27:36.157 crypto_ram : 5.01 9127.83 35.66 0.00 0.00 13977.97 1199.31 18454.94 00:27:36.157 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:36.157 Verification LBA range: start 0x800 length 0x800 00:27:36.157 crypto_ram : 5.02 9314.78 36.39 0.00 0.00 13697.36 1146.88 18350.08 00:27:36.157 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:36.157 Verification LBA range: start 0x0 length 0x800 00:27:36.157 crypto_ram3 : 5.02 4587.98 17.92 0.00 0.00 27779.07 1448.35 19818.09 00:27:36.157 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:36.157 Verification LBA range: start 0x800 length 0x800 00:27:36.157 crypto_ram3 : 5.02 4664.96 18.22 0.00 0.00 27321.81 1494.22 19398.66 00:27:36.157 =================================================================================================================== 00:27:36.157 Total : 27695.55 108.19 0.00 0.00 18423.77 1146.88 19818.09 00:27:36.157 00:27:36.157 real 0m5.656s 00:27:36.157 user 0m10.761s 00:27:36.157 sys 0m0.202s 00:27:36.157 18:30:44 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:36.157 18:30:44 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:27:36.157 ************************************ 00:27:36.157 END 
TEST bdev_verify 00:27:36.157 ************************************ 00:27:36.157 18:30:44 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:36.157 18:30:44 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:27:36.157 18:30:44 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:36.157 18:30:44 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:36.416 ************************************ 00:27:36.416 START TEST bdev_verify_big_io 00:27:36.416 ************************************ 00:27:36.416 18:30:44 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:36.416 [2024-07-24 18:30:44.816131] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:27:36.416 [2024-07-24 18:30:44.816170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2357206 ] 00:27:36.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:36.416 EAL: Requested device 0000:b3:01.0 cannot be used [same message pair repeated for each remaining QAT device, 0000:b3:01.1 through 0000:b5:02.7] 00:27:36.416 [2024-07-24 18:30:44.905516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:36.416 [2024-07-24 18:30:44.975215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.416 [2024-07-24 18:30:44.975218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.675 [2024-07-24 18:30:45.129140] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" [2024-07-24 18:30:45.129187] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 [2024-07-24 18:30:45.129197] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival [2024-07-24 18:30:45.137170] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" [2024-07-24 18:30:45.137189] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 [2024-07-24 18:30:45.137197] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival [2024-07-24 18:30:45.145183] vbdev_crypto_rpc.c:
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:27:36.675 [2024-07-24 18:30:45.145196] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:27:36.675 [2024-07-24 18:30:45.145204] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:27:36.675 Running I/O for 5 seconds...
00:27:41.935
00:27:41.935 Latency(us)
00:27:41.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:41.935 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:41.935 Verification LBA range: start 0x0 length 0x80
00:27:41.935 crypto_ram : 5.21 713.04 44.56 0.00 0.00 176483.92 4220.52 239914.19
00:27:41.935 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:41.935 Verification LBA range: start 0x80 length 0x80
00:27:41.935 crypto_ram : 5.22 710.84 44.43 0.00 0.00 176978.58 4168.09 243269.63
00:27:41.935 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:41.935 Verification LBA range: start 0x0 length 0x80
00:27:41.935 crypto_ram3 : 5.22 368.11 23.01 0.00 0.00 333617.27 4535.09 253335.96
00:27:41.935 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:41.935 Verification LBA range: start 0x80 length 0x80
00:27:41.935 crypto_ram3 : 5.23 367.05 22.94 0.00 0.00 334685.16 3827.30 253335.96
00:27:41.935 ===================================================================================================================
00:27:41.935 Total : 2159.04 134.94 0.00 0.00 230397.15 3827.30 253335.96
00:27:42.193
00:27:42.193 real 0m5.864s
00:27:42.193 user 0m11.204s
00:27:42.193 sys 0m0.186s
00:27:42.193 18:30:50 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:42.193 18:30:50 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:27:42.193 ************************************
00:27:42.193 END TEST bdev_verify_big_io 00:27:42.193 ************************************ 00:27:42.193 18:30:50 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:42.193 18:30:50 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:27:42.193 18:30:50 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:42.193 18:30:50 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:42.193 ************************************ 00:27:42.193 START TEST bdev_write_zeroes 00:27:42.193 ************************************ 00:27:42.193 18:30:50 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:42.193 [2024-07-24 18:30:50.765622] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:27:42.193 [2024-07-24 18:30:50.765668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358222 ] 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:01.0 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:01.1 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:01.2 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:01.3 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:01.4 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:01.5 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:01.6 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:01.7 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:02.0 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:02.1 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:02.2 cannot be used 00:27:42.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.451 EAL: Requested device 0000:b3:02.3 cannot be used 
00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b3:02.4 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b3:02.5 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b3:02.6 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b3:02.7 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:01.0 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:01.1 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:01.2 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:01.3 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:01.4 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:01.5 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:01.6 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:01.7 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:02.0 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:02.1 cannot be used 00:27:42.452 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:02.2 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:02.3 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:02.4 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:02.5 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:02.6 cannot be used 00:27:42.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:42.452 EAL: Requested device 0000:b5:02.7 cannot be used 00:27:42.452 [2024-07-24 18:30:50.859089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.452 [2024-07-24 18:30:50.929035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.710 [2024-07-24 18:30:51.087543] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:42.710 [2024-07-24 18:30:51.087600] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:42.710 [2024-07-24 18:30:51.087610] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:42.710 [2024-07-24 18:30:51.095560] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:42.710 [2024-07-24 18:30:51.095573] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:42.710 [2024-07-24 18:30:51.095581] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:42.710 [2024-07-24 18:30:51.103580] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:42.710 [2024-07-24 18:30:51.103594] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:27:42.710 [2024-07-24 18:30:51.103601] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:27:42.710 Running I/O for 1 seconds...
00:27:43.644
00:27:43.644 Latency(us)
00:27:43.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:43.644 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:27:43.644 crypto_ram : 1.01 42534.73 166.15 0.00 0.00 3003.13 822.48 4325.38
00:27:43.644 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:27:43.644 crypto_ram3 : 1.01 21239.39 82.97 0.00 0.00 5998.89 3722.44 6501.17
00:27:43.644 ===================================================================================================================
00:27:43.644 Total : 63774.12 249.12 0.00 0.00 4001.71 822.48 6501.17
00:27:43.902
00:27:43.902 real 0m1.620s
00:27:43.902 user 0m1.412s
00:27:43.902 sys 0m0.189s
00:27:43.902 18:30:52 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:43.902 18:30:52 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:27:43.902 ************************************
00:27:43.902 END TEST bdev_write_zeroes
00:27:43.902 ************************************
00:27:43.902 18:30:52 blockdev_crypto_sw -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:27:43.903 18:30:52 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:27:43.903 18:30:52 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:27:43.903 18:30:52 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:27:43.903
************************************ 00:27:43.903 START TEST bdev_json_nonenclosed 00:27:43.903 ************************************ 00:27:43.903 18:30:52 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:43.903 [2024-07-24 18:30:52.466999] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:27:43.903 [2024-07-24 18:30:52.467042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358550 ] 00:27:44.161 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:01.0 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:01.1 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:01.2 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:01.3 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:01.4 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:01.5 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:01.6 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:01.7 cannot be used 00:27:44.162 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:02.0 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:02.1 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:02.2 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:02.3 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:02.4 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:02.5 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:02.6 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b3:02.7 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:01.0 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:01.1 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:01.2 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:01.3 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:01.4 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:01.5 cannot be used 00:27:44.162 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:01.6 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:01.7 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:02.0 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:02.1 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:02.2 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:02.3 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:02.4 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:02.5 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:02.6 cannot be used 00:27:44.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.162 EAL: Requested device 0000:b5:02.7 cannot be used 00:27:44.162 [2024-07-24 18:30:52.556662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.162 [2024-07-24 18:30:52.625936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.162 [2024-07-24 18:30:52.625991] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:27:44.162 [2024-07-24 18:30:52.626003] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:44.162 [2024-07-24 18:30:52.626010] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:44.162 00:27:44.162 real 0m0.282s 00:27:44.162 user 0m0.163s 00:27:44.162 sys 0m0.118s 00:27:44.162 18:30:52 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.162 18:30:52 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:44.162 ************************************ 00:27:44.162 END TEST bdev_json_nonenclosed 00:27:44.162 ************************************ 00:27:44.162 18:30:52 blockdev_crypto_sw -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:44.162 18:30:52 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:27:44.162 18:30:52 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.162 18:30:52 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:44.421 ************************************ 00:27:44.421 START TEST bdev_json_nonarray 00:27:44.421 ************************************ 00:27:44.421 18:30:52 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:44.421 [2024-07-24 18:30:52.836239] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:27:44.421 [2024-07-24 18:30:52.836279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358604 ] 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:01.0 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:01.1 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:01.2 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:01.3 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:01.4 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:01.5 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:01.6 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:01.7 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:02.0 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:02.1 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:02.2 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:02.3 cannot be used 
00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:02.4 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:02.5 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:02.6 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b3:02.7 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:01.0 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:01.1 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:01.2 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:01.3 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:01.4 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:01.5 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:01.6 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:01.7 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:02.0 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:02.1 cannot be used 00:27:44.421 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:02.2 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:02.3 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:02.4 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:02.5 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:02.6 cannot be used 00:27:44.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.421 EAL: Requested device 0000:b5:02.7 cannot be used 00:27:44.421 [2024-07-24 18:30:52.926208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.422 [2024-07-24 18:30:52.994769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.422 [2024-07-24 18:30:52.994826] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:27:44.422 [2024-07-24 18:30:52.994838] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:44.422 [2024-07-24 18:30:52.994846] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:44.681 00:27:44.681 real 0m0.284s 00:27:44.681 user 0m0.167s 00:27:44.681 sys 0m0.115s 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:44.681 ************************************ 00:27:44.681 END TEST bdev_json_nonarray 00:27:44.681 ************************************ 00:27:44.681 18:30:53 blockdev_crypto_sw -- bdev/blockdev.sh@786 -- # [[ crypto_sw == bdev ]] 00:27:44.681 18:30:53 blockdev_crypto_sw -- bdev/blockdev.sh@793 -- # [[ crypto_sw == gpt ]] 00:27:44.681 18:30:53 blockdev_crypto_sw -- bdev/blockdev.sh@797 -- # [[ crypto_sw == crypto_sw ]] 00:27:44.681 18:30:53 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:27:44.681 18:30:53 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:44.681 18:30:53 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.681 18:30:53 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:44.681 ************************************ 00:27:44.681 START TEST bdev_crypto_enomem 00:27:44.681 ************************************ 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1125 -- # bdev_crypto_enomem 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@634 -- # local base_dev=base0 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local test_dev=crypt0 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local err_dev=EE_base0 00:27:44.681 18:30:53 
blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local qd=32 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # ERR_PID=2358625 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # waitforlisten 2358625 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@831 -- # '[' -z 2358625 ']' 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:44.681 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:44.681 [2024-07-24 18:30:53.203461] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:27:44.681 [2024-07-24 18:30:53.203512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358625 ] 00:27:44.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.681 EAL: Requested device 0000:b3:01.0 cannot be used [previous two messages repeated for QAT virtual functions 0000:b3:01.1 through 0000:b5:02.7] 00:27:44.939 [2024-07-24 18:30:53.296710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.939 [2024-07-24 18:30:53.369701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.503 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:45.503 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@864 -- # return 0 00:27:45.503 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@644 -- # rpc_cmd 00:27:45.503 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.503 18:30:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:45.503 true 00:27:45.503 base0 00:27:45.503 true 00:27:45.503 [2024-07-24 18:30:54.026534] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:45.503 crypt0 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@651 -- # waitforbdev crypt0
00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local bdev_name=crypt0 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@901 -- # local i 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:45.503 [ 00:27:45.503 { 00:27:45.503 "name": "crypt0", 00:27:45.503 "aliases": [ 00:27:45.503 "53d50ddd-b623-52cb-b6ac-3884f67ee45c" 00:27:45.503 ], 00:27:45.503 "product_name": "crypto", 00:27:45.503 "block_size": 512, 00:27:45.503 "num_blocks": 2097152, 00:27:45.503 "uuid": "53d50ddd-b623-52cb-b6ac-3884f67ee45c", 00:27:45.503 "assigned_rate_limits": { 00:27:45.503 "rw_ios_per_sec": 0, 00:27:45.503 "rw_mbytes_per_sec": 0, 00:27:45.503 "r_mbytes_per_sec": 0, 00:27:45.503 "w_mbytes_per_sec": 0 00:27:45.503 }, 00:27:45.503 "claimed": false, 00:27:45.503 "zoned": false, 00:27:45.503 "supported_io_types": { 
00:27:45.503 "read": true, 00:27:45.503 "write": true, 00:27:45.503 "unmap": false, 00:27:45.503 "flush": false, 00:27:45.503 "reset": true, 00:27:45.503 "nvme_admin": false, 00:27:45.503 "nvme_io": false, 00:27:45.503 "nvme_io_md": false, 00:27:45.503 "write_zeroes": true, 00:27:45.503 "zcopy": false, 00:27:45.503 "get_zone_info": false, 00:27:45.503 "zone_management": false, 00:27:45.503 "zone_append": false, 00:27:45.503 "compare": false, 00:27:45.503 "compare_and_write": false, 00:27:45.503 "abort": false, 00:27:45.503 "seek_hole": false, 00:27:45.503 "seek_data": false, 00:27:45.503 "copy": false, 00:27:45.503 "nvme_iov_md": false 00:27:45.503 }, 00:27:45.503 "memory_domains": [ 00:27:45.503 { 00:27:45.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.503 "dma_device_type": 2 00:27:45.503 } 00:27:45.503 ], 00:27:45.503 "driver_specific": { 00:27:45.503 "crypto": { 00:27:45.503 "base_bdev_name": "EE_base0", 00:27:45.503 "name": "crypt0", 00:27:45.503 "key_name": "test_dek_sw" 00:27:45.503 } 00:27:45.503 } 00:27:45.503 } 00:27:45.503 ] 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@907 -- # return 0 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # rpcpid=2358880 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@656 -- # sleep 1 00:27:45.503 18:30:54 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:45.761 Running I/O for 5 seconds... 
00:27:46.695 18:30:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:27:46.695 18:30:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.695 18:30:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:46.696 18:30:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.696 18:30:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@659 -- # wait 2358880 00:27:50.876 00:27:50.876 Latency(us) 00:27:50.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.876 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:27:50.876 crypt0 : 5.00 57994.57 226.54 0.00 0.00 549.32 252.31 760.22 00:27:50.876 =================================================================================================================== 00:27:50.876 Total : 57994.57 226.54 0.00 0.00 549.32 252.31 760.22 00:27:50.876 0 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@661 -- # rpc_cmd bdev_crypto_delete crypt0 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@663 -- # killprocess 2358625 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@950 -- # '[' -z 2358625 ']' 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # kill -0 2358625 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # uname 00:27:50.876 18:30:59 
blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2358625 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2358625' 00:27:50.876 killing process with pid 2358625 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@969 -- # kill 2358625 00:27:50.876 Received shutdown signal, test time was about 5.000000 seconds 00:27:50.876 00:27:50.876 Latency(us) 00:27:50.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.876 =================================================================================================================== 00:27:50.876 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@974 -- # wait 2358625 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # trap - SIGINT SIGTERM EXIT 00:27:50.876 00:27:50.876 real 0m6.247s 00:27:50.876 user 0m6.408s 00:27:50.876 sys 0m0.319s 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:50.876 18:30:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:50.876 ************************************ 00:27:50.876 END TEST bdev_crypto_enomem 00:27:50.876 ************************************ 00:27:50.876 18:30:59 blockdev_crypto_sw -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:27:50.876 18:30:59 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # 
cleanup 00:27:50.876 18:30:59 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:27:50.876 18:30:59 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:50.876 18:30:59 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:27:50.876 18:30:59 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:27:50.876 18:30:59 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:27:50.876 18:30:59 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:27:50.876 00:27:50.876 real 0m51.321s 00:27:50.876 user 1m40.437s 00:27:50.876 sys 0m5.514s 00:27:50.876 18:30:59 blockdev_crypto_sw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:50.876 18:30:59 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:50.876 ************************************ 00:27:50.876 END TEST blockdev_crypto_sw 00:27:50.876 ************************************ 00:27:51.135 18:30:59 -- spdk/autotest.sh@363 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:27:51.135 18:30:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:51.135 18:30:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:51.135 18:30:59 -- common/autotest_common.sh@10 -- # set +x 00:27:51.135 ************************************ 00:27:51.135 START TEST blockdev_crypto_qat 00:27:51.135 ************************************ 00:27:51.135 18:30:59 blockdev_crypto_qat -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:27:51.135 * Looking for test storage... 
00:27:51.135 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # uname -s 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@681 -- # test_type=crypto_qat 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # crypto_device= 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # dek= 00:27:51.135 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # 
env_ctx= 00:27:51.136 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:27:51.136 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:27:51.136 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == bdev ]] 00:27:51.136 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == crypto_* ]] 00:27:51.136 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:27:51.136 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:27:51.136 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=2359756 00:27:51.136 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:51.136 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:27:51.136 18:30:59 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 2359756 00:27:51.136 18:30:59 blockdev_crypto_qat -- common/autotest_common.sh@831 -- # '[' -z 2359756 ']' 00:27:51.136 18:30:59 blockdev_crypto_qat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.136 18:30:59 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:51.136 18:30:59 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.136 18:30:59 blockdev_crypto_qat -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:51.136 18:30:59 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:51.136 [2024-07-24 18:30:59.716959] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:27:51.136 [2024-07-24 18:30:59.717013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359756 ] 00:27:51.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.396 EAL: Requested device 0000:b3:01.0 cannot be used [previous two messages repeated for QAT virtual functions 0000:b3:01.1 through 0000:b5:02.7] 00:27:51.397 [2024-07-24 18:30:59.811504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.397 [2024-07-24 18:30:59.885174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.992 18:31:00 blockdev_crypto_qat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:51.992 18:31:00 blockdev_crypto_qat -- common/autotest_common.sh@864 -- # return 0 00:27:51.992 18:31:00 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:27:51.992 18:31:00 blockdev_crypto_qat -- bdev/blockdev.sh@707 -- # setup_crypto_qat_conf 00:27:51.992 18:31:00 blockdev_crypto_qat -- bdev/blockdev.sh@169 -- # rpc_cmd 00:27:51.992 18:31:00 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.992 18:31:00 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:51.992 [2024-07-24 18:31:00.523108] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:51.992 [2024-07-24 18:31:00.531138] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:51.992 [2024-07-24 18:31:00.539153]
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:52.251 [2024-07-24 18:31:00.602078] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:54.787 true 00:27:54.787 true 00:27:54.787 true 00:27:54.787 true 00:27:54.787 Malloc0 00:27:54.787 Malloc1 00:27:54.787 Malloc2 00:27:54.787 Malloc3 00:27:54.787 [2024-07-24 18:31:02.896298] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:54.787 crypto_ram 00:27:54.787 [2024-07-24 18:31:02.904313] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:54.787 crypto_ram1 00:27:54.787 [2024-07-24 18:31:02.912333] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:54.787 crypto_ram2 00:27:54.787 [2024-07-24 18:31:02.920363] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:54.787 crypto_ram3 00:27:54.787 [ 00:27:54.787 { 00:27:54.787 "name": "Malloc1", 00:27:54.787 "aliases": [ 00:27:54.787 "e8214cf7-bc49-43dd-88fb-04c065ec6517" 00:27:54.787 ], 00:27:54.787 "product_name": "Malloc disk", 00:27:54.787 "block_size": 512, 00:27:54.787 "num_blocks": 65536, 00:27:54.787 "uuid": "e8214cf7-bc49-43dd-88fb-04c065ec6517", 00:27:54.787 "assigned_rate_limits": { 00:27:54.787 "rw_ios_per_sec": 0, 00:27:54.787 "rw_mbytes_per_sec": 0, 00:27:54.787 "r_mbytes_per_sec": 0, 00:27:54.787 "w_mbytes_per_sec": 0 00:27:54.787 }, 00:27:54.787 "claimed": true, 00:27:54.787 "claim_type": "exclusive_write", 00:27:54.787 "zoned": false, 00:27:54.787 "supported_io_types": { 00:27:54.787 "read": true, 00:27:54.787 "write": true, 00:27:54.787 "unmap": true, 00:27:54.787 "flush": true, 00:27:54.787 "reset": true, 00:27:54.787 "nvme_admin": false, 00:27:54.787 "nvme_io": false, 00:27:54.787 "nvme_io_md": false, 00:27:54.787 "write_zeroes": true, 00:27:54.787 "zcopy": true, 00:27:54.787 
"get_zone_info": false, 00:27:54.787 "zone_management": false, 00:27:54.787 "zone_append": false, 00:27:54.787 "compare": false, 00:27:54.787 "compare_and_write": false, 00:27:54.787 "abort": true, 00:27:54.787 "seek_hole": false, 00:27:54.787 "seek_data": false, 00:27:54.787 "copy": true, 00:27:54.787 "nvme_iov_md": false 00:27:54.787 }, 00:27:54.787 "memory_domains": [ 00:27:54.787 { 00:27:54.787 "dma_device_id": "system", 00:27:54.787 "dma_device_type": 1 00:27:54.787 }, 00:27:54.787 { 00:27:54.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:54.787 "dma_device_type": 2 00:27:54.787 } 00:27:54.787 ], 00:27:54.787 "driver_specific": {} 00:27:54.787 } 00:27:54.787 ] 00:27:54.787 18:31:02 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.787 18:31:02 blockdev_crypto_qat -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:27:54.787 18:31:02 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.787 18:31:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:54.787 18:31:02 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.787 18:31:02 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # cat 00:27:54.787 18:31:02 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:27:54.787 18:31:02 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.787 18:31:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:54.787 18:31:02 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.787 18:31:02 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:27:54.787 18:31:02 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.788 18:31:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r .name 00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "5fc82f66-e263-5cba-aac0-4dd2f58ff631"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5fc82f66-e263-5cba-aac0-4dd2f58ff631",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "5fd2dd29-6ab1-5040-b85a-fd442c1ca3d7"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5fd2dd29-6ab1-5040-b85a-fd442c1ca3d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "c264b3f1-179b-5fd2-89f5-b9ba0874ee86"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "c264b3f1-179b-5fd2-89f5-b9ba0874ee86",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "77d22ca1-c572-52fc-a110-f8d0c39986fc"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "77d22ca1-c572-52fc-a110-f8d0c39986fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # trap 
- SIGINT SIGTERM EXIT 00:27:54.788 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # killprocess 2359756 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@950 -- # '[' -z 2359756 ']' 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # kill -0 2359756 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # uname 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2359756 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2359756' 00:27:54.788 killing process with pid 2359756 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@969 -- # kill 2359756 00:27:54.788 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@974 -- # wait 2359756 00:27:55.047 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:55.047 18:31:03 blockdev_crypto_qat -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:27:55.047 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:55.047 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:55.047 18:31:03 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:55.047 ************************************ 00:27:55.047 START TEST bdev_hello_world 00:27:55.047 ************************************ 00:27:55.047 18:31:03 
blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:27:55.307 [2024-07-24 18:31:03.683769] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:27:55.307 [2024-07-24 18:31:03.683821] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360528 ] 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:01.0 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:01.1 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:01.2 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:01.3 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:01.4 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:01.5 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:01.6 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:01.7 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:02.0 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:27:55.307 EAL: Requested device 0000:b3:02.1 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:02.2 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:02.3 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:02.4 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:02.5 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:02.6 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b3:02.7 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:01.0 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:01.1 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:01.2 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:01.3 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:01.4 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:01.5 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:01.6 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 
EAL: Requested device 0000:b5:01.7 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:02.0 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:02.1 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:02.2 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:02.3 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:02.4 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:02.5 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:02.6 cannot be used 00:27:55.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:55.307 EAL: Requested device 0000:b5:02.7 cannot be used 00:27:55.307 [2024-07-24 18:31:03.773484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.307 [2024-07-24 18:31:03.843321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.307 [2024-07-24 18:31:03.864256] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:55.307 [2024-07-24 18:31:03.872283] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:55.307 [2024-07-24 18:31:03.880300] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:55.567 [2024-07-24 18:31:03.981666] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:58.104 [2024-07-24 
18:31:06.136296] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:58.104 [2024-07-24 18:31:06.136350] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:58.104 [2024-07-24 18:31:06.136361] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:58.104 [2024-07-24 18:31:06.144325] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:58.104 [2024-07-24 18:31:06.144338] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:58.105 [2024-07-24 18:31:06.144345] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:58.105 [2024-07-24 18:31:06.152334] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:58.105 [2024-07-24 18:31:06.152346] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:58.105 [2024-07-24 18:31:06.152354] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:58.105 [2024-07-24 18:31:06.160355] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:58.105 [2024-07-24 18:31:06.160367] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:58.105 [2024-07-24 18:31:06.160374] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:58.105 [2024-07-24 18:31:06.228295] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:58.105 [2024-07-24 18:31:06.228329] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:27:58.105 [2024-07-24 18:31:06.228342] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:58.105 [2024-07-24 18:31:06.229238] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to 
the bdev 00:27:58.105 [2024-07-24 18:31:06.229293] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:58.105 [2024-07-24 18:31:06.229305] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:58.105 [2024-07-24 18:31:06.229335] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:58.105 00:27:58.105 [2024-07-24 18:31:06.229348] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:58.105 00:27:58.105 real 0m2.901s 00:27:58.105 user 0m2.565s 00:27:58.105 sys 0m0.301s 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:27:58.105 ************************************ 00:27:58.105 END TEST bdev_hello_world 00:27:58.105 ************************************ 00:27:58.105 18:31:06 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:27:58.105 18:31:06 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:58.105 18:31:06 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:58.105 18:31:06 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:58.105 ************************************ 00:27:58.105 START TEST bdev_bounds 00:27:58.105 ************************************ 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=2361009 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 2361009' 00:27:58.105 Process bdevio pid: 2361009 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds 
-- bdev/blockdev.sh@292 -- # waitforlisten 2361009 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 2361009 ']' 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:58.105 18:31:06 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:27:58.105 [2024-07-24 18:31:06.661179] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:27:58.105 [2024-07-24 18:31:06.661224] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2361009 ] 00:27:58.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.364 EAL: Requested device 0000:b3:01.0 cannot be used 00:27:58.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.364 EAL: Requested device 0000:b3:01.1 cannot be used 00:27:58.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.364 EAL: Requested device 0000:b3:01.2 cannot be used 00:27:58.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.364 EAL: Requested device 0000:b3:01.3 cannot be used 00:27:58.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.364 EAL: Requested device 0000:b3:01.4 cannot be used 00:27:58.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.364 EAL: Requested device 0000:b3:01.5 cannot be used 00:27:58.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.364 EAL: Requested device 0000:b3:01.6 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b3:01.7 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b3:02.0 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b3:02.1 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b3:02.2 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b3:02.3 cannot be used 
00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b3:02.4 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b3:02.5 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b3:02.6 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b3:02.7 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:01.0 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:01.1 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:01.2 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:01.3 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:01.4 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:01.5 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:01.6 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:01.7 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:02.0 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:02.1 cannot be used 00:27:58.365 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:02.2 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:02.3 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:02.4 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:02.5 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:02.6 cannot be used 00:27:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:58.365 EAL: Requested device 0000:b5:02.7 cannot be used 00:27:58.365 [2024-07-24 18:31:06.753600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:58.365 [2024-07-24 18:31:06.829203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.365 [2024-07-24 18:31:06.829298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.365 [2024-07-24 18:31:06.829300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.365 [2024-07-24 18:31:06.850257] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:58.365 [2024-07-24 18:31:06.858284] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:58.365 [2024-07-24 18:31:06.866305] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:58.624 [2024-07-24 18:31:06.961125] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:00.531 [2024-07-24 18:31:09.115792] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:00.531 [2024-07-24 18:31:09.115858] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:00.531 [2024-07-24 18:31:09.115869] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:00.531 [2024-07-24 18:31:09.123803] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:00.531 [2024-07-24 18:31:09.123817] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:00.531 [2024-07-24 18:31:09.123825] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:00.790 [2024-07-24 18:31:09.131824] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:28:00.790 [2024-07-24 18:31:09.131838] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:00.790 [2024-07-24 18:31:09.131846] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:00.790 [2024-07-24 18:31:09.139847] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:28:00.790 [2024-07-24 18:31:09.139859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:00.790 [2024-07-24 18:31:09.139867] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:00.790 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.790 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:28:00.790 18:31:09 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:00.790 I/O targets: 00:28:00.790 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:28:00.790 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:28:00.790 crypto_ram2: 8192 blocks of 4096 
bytes (32 MiB) 00:28:00.790 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:28:00.790 00:28:00.790 00:28:00.790 CUnit - A unit testing framework for C - Version 2.1-3 00:28:00.790 http://cunit.sourceforge.net/ 00:28:00.790 00:28:00.790 00:28:00.790 Suite: bdevio tests on: crypto_ram3 00:28:00.790 Test: blockdev write read block ...passed 00:28:00.790 Test: blockdev write zeroes read block ...passed 00:28:00.790 Test: blockdev write zeroes read no split ...passed 00:28:00.790 Test: blockdev write zeroes read split ...passed 00:28:00.790 Test: blockdev write zeroes read split partial ...passed 00:28:00.790 Test: blockdev reset ...passed 00:28:00.790 Test: blockdev write read 8 blocks ...passed 00:28:00.790 Test: blockdev write read size > 128k ...passed 00:28:00.790 Test: blockdev write read invalid size ...passed 00:28:00.790 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:00.790 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:00.790 Test: blockdev write read max offset ...passed 00:28:00.790 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:00.790 Test: blockdev writev readv 8 blocks ...passed 00:28:00.790 Test: blockdev writev readv 30 x 1block ...passed 00:28:00.790 Test: blockdev writev readv block ...passed 00:28:00.790 Test: blockdev writev readv size > 128k ...passed 00:28:00.791 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:00.791 Test: blockdev comparev and writev ...passed 00:28:00.791 Test: blockdev nvme passthru rw ...passed 00:28:00.791 Test: blockdev nvme passthru vendor specific ...passed 00:28:00.791 Test: blockdev nvme admin passthru ...passed 00:28:00.791 Test: blockdev copy ...passed 00:28:00.791 Suite: bdevio tests on: crypto_ram2 00:28:00.791 Test: blockdev write read block ...passed 00:28:00.791 Test: blockdev write zeroes read block ...passed 00:28:00.791 Test: blockdev write zeroes read no split ...passed 00:28:00.791 Test: 
blockdev write zeroes read split ...passed 00:28:00.791 Test: blockdev write zeroes read split partial ...passed 00:28:00.791 Test: blockdev reset ...passed 00:28:00.791 Test: blockdev write read 8 blocks ...passed 00:28:00.791 Test: blockdev write read size > 128k ...passed 00:28:00.791 Test: blockdev write read invalid size ...passed 00:28:00.791 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:00.791 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:00.791 Test: blockdev write read max offset ...passed 00:28:00.791 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:00.791 Test: blockdev writev readv 8 blocks ...passed 00:28:00.791 Test: blockdev writev readv 30 x 1block ...passed 00:28:00.791 Test: blockdev writev readv block ...passed 00:28:00.791 Test: blockdev writev readv size > 128k ...passed 00:28:00.791 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:00.791 Test: blockdev comparev and writev ...passed 00:28:00.791 Test: blockdev nvme passthru rw ...passed 00:28:00.791 Test: blockdev nvme passthru vendor specific ...passed 00:28:00.791 Test: blockdev nvme admin passthru ...passed 00:28:00.791 Test: blockdev copy ...passed 00:28:00.791 Suite: bdevio tests on: crypto_ram1 00:28:00.791 Test: blockdev write read block ...passed 00:28:00.791 Test: blockdev write zeroes read block ...passed 00:28:00.791 Test: blockdev write zeroes read no split ...passed 00:28:01.050 Test: blockdev write zeroes read split ...passed 00:28:01.050 Test: blockdev write zeroes read split partial ...passed 00:28:01.050 Test: blockdev reset ...passed 00:28:01.050 Test: blockdev write read 8 blocks ...passed 00:28:01.050 Test: blockdev write read size > 128k ...passed 00:28:01.050 Test: blockdev write read invalid size ...passed 00:28:01.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:01.050 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:28:01.050 Test: blockdev write read max offset ...passed 00:28:01.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:01.050 Test: blockdev writev readv 8 blocks ...passed 00:28:01.050 Test: blockdev writev readv 30 x 1block ...passed 00:28:01.050 Test: blockdev writev readv block ...passed 00:28:01.050 Test: blockdev writev readv size > 128k ...passed 00:28:01.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:01.050 Test: blockdev comparev and writev ...passed 00:28:01.050 Test: blockdev nvme passthru rw ...passed 00:28:01.050 Test: blockdev nvme passthru vendor specific ...passed 00:28:01.050 Test: blockdev nvme admin passthru ...passed 00:28:01.050 Test: blockdev copy ...passed 00:28:01.050 Suite: bdevio tests on: crypto_ram 00:28:01.050 Test: blockdev write read block ...passed 00:28:01.050 Test: blockdev write zeroes read block ...passed 00:28:01.050 Test: blockdev write zeroes read no split ...passed 00:28:01.050 Test: blockdev write zeroes read split ...passed 00:28:01.050 Test: blockdev write zeroes read split partial ...passed 00:28:01.050 Test: blockdev reset ...passed 00:28:01.050 Test: blockdev write read 8 blocks ...passed 00:28:01.050 Test: blockdev write read size > 128k ...passed 00:28:01.050 Test: blockdev write read invalid size ...passed 00:28:01.050 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:01.050 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:01.050 Test: blockdev write read max offset ...passed 00:28:01.050 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:01.050 Test: blockdev writev readv 8 blocks ...passed 00:28:01.050 Test: blockdev writev readv 30 x 1block ...passed 00:28:01.050 Test: blockdev writev readv block ...passed 00:28:01.050 Test: blockdev writev readv size > 128k ...passed 00:28:01.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:01.050 
Test: blockdev comparev and writev ...passed 00:28:01.050 Test: blockdev nvme passthru rw ...passed 00:28:01.050 Test: blockdev nvme passthru vendor specific ...passed 00:28:01.050 Test: blockdev nvme admin passthru ...passed 00:28:01.050 Test: blockdev copy ...passed 00:28:01.050 00:28:01.050 Run Summary: Type Total Ran Passed Failed Inactive 00:28:01.050 suites 4 4 n/a 0 0 00:28:01.050 tests 92 92 92 0 0 00:28:01.050 asserts 520 520 520 0 n/a 00:28:01.050 00:28:01.050 Elapsed time = 0.495 seconds 00:28:01.050 0 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 2361009 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 2361009 ']' 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 2361009 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2361009 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2361009' 00:28:01.050 killing process with pid 2361009 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@969 -- # kill 2361009 00:28:01.050 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@974 -- # wait 2361009 00:28:01.619 18:31:09 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:28:01.619 00:28:01.619 real 0m3.339s 00:28:01.619 user 0m9.337s 00:28:01.619 sys 
0m0.491s 00:28:01.619 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:01.619 18:31:09 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:01.619 ************************************ 00:28:01.619 END TEST bdev_bounds 00:28:01.619 ************************************ 00:28:01.619 18:31:09 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:28:01.619 18:31:09 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:01.619 18:31:09 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:01.619 18:31:09 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:01.619 ************************************ 00:28:01.619 START TEST bdev_nbd 00:28:01.619 ************************************ 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- 
bdev/blockdev.sh@304 -- # local bdev_num=4 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:01.619 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=2361673 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 2361673 /var/tmp/spdk-nbd.sock 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 2361673 ']' 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:01.620 18:31:10 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:01.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.620 18:31:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:01.620 [2024-07-24 18:31:10.092644] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:28:01.620 [2024-07-24 18:31:10.092698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:01.0 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:01.1 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:01.2 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:01.3 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:01.4 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:01.5 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:01.6 cannot be used 00:28:01.620 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:01.7 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:02.0 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:02.1 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:02.2 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:02.3 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:02.4 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:02.5 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:02.6 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b3:02.7 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:01.0 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:01.1 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:01.2 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:01.3 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:01.4 cannot be used 00:28:01.620 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:01.5 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:01.6 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:01.7 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:02.0 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:02.1 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:02.2 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:02.3 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:02.4 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:02.5 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:02.6 cannot be used 00:28:01.620 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.620 EAL: Requested device 0000:b5:02.7 cannot be used 00:28:01.620 [2024-07-24 18:31:10.185413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.879 [2024-07-24 18:31:10.258999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.879 [2024-07-24 18:31:10.279883] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:28:01.879 [2024-07-24 18:31:10.287907] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will 
be assigned to module dpdk_cryptodev 00:28:01.879 [2024-07-24 18:31:10.295922] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:01.879 [2024-07-24 18:31:10.389510] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:04.418 [2024-07-24 18:31:12.543218] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:04.418 [2024-07-24 18:31:12.543273] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:04.418 [2024-07-24 18:31:12.543283] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:04.418 [2024-07-24 18:31:12.551237] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:04.418 [2024-07-24 18:31:12.551249] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:04.418 [2024-07-24 18:31:12.551257] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:04.418 [2024-07-24 18:31:12.559255] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:28:04.418 [2024-07-24 18:31:12.559266] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:04.418 [2024-07-24 18:31:12.559273] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:04.418 [2024-07-24 18:31:12.567274] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:28:04.418 [2024-07-24 18:31:12.567285] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:04.418 [2024-07-24 18:31:12.567292] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:04.418 1+0 records in 00:28:04.418 1+0 records out 00:28:04.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205658 s, 19.9 MB/s 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( 
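The trace above runs `waitfornbd`, which polls `/proc/partitions` up to 20 times for the device name and then verifies the device with a single 4 KiB direct read. The retry structure can be sketched generically as below; the name `waitfor` and the probe targets are hypothetical illustrations, not SPDK helpers.

```shell
# Generic form of the waitfornbd retry loop seen in the trace: run a probe
# command up to 20 times, sleeping briefly between attempts, and succeed as
# soon as the probe does. In the real helper the probe is
# `grep -q -w nbd0 /proc/partitions` followed by a `dd ... iflag=direct` read.
waitfor() {
    local i
    for ((i = 1; i <= 20; i++)); do
        "$@" && return 0
        sleep 0.05
    done
    return 1
}

# Illustrative probe target (hypothetical; stands in for the nbd device node).
touch /tmp/nbd_probe_present
waitfor test -e /tmp/nbd_probe_present && echo "device ready"
```

The bounded loop matters: if the kernel never exposes the device, the test fails after ~20 attempts instead of hanging the CI job.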
i++ )) 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:28:04.418 18:31:12 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:04.678 1+0 records in 00:28:04.678 1+0 records out 00:28:04.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302196 s, 13.6 MB/s 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:28:04.678 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:28:04.935 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:28:04.935 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:04.936 1+0 records in 00:28:04.936 1+0 records out 00:28:04.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296002 s, 13.8 MB/s 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 
20 )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:04.936 1+0 records in 00:28:04.936 1+0 records out 00:28:04.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318837 s, 12.8 MB/s 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:28:04.936 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:05.193 { 00:28:05.193 "nbd_device": "/dev/nbd0", 00:28:05.193 "bdev_name": "crypto_ram" 00:28:05.193 }, 00:28:05.193 { 
00:28:05.193 "nbd_device": "/dev/nbd1", 00:28:05.193 "bdev_name": "crypto_ram1" 00:28:05.193 }, 00:28:05.193 { 00:28:05.193 "nbd_device": "/dev/nbd2", 00:28:05.193 "bdev_name": "crypto_ram2" 00:28:05.193 }, 00:28:05.193 { 00:28:05.193 "nbd_device": "/dev/nbd3", 00:28:05.193 "bdev_name": "crypto_ram3" 00:28:05.193 } 00:28:05.193 ]' 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:05.193 { 00:28:05.193 "nbd_device": "/dev/nbd0", 00:28:05.193 "bdev_name": "crypto_ram" 00:28:05.193 }, 00:28:05.193 { 00:28:05.193 "nbd_device": "/dev/nbd1", 00:28:05.193 "bdev_name": "crypto_ram1" 00:28:05.193 }, 00:28:05.193 { 00:28:05.193 "nbd_device": "/dev/nbd2", 00:28:05.193 "bdev_name": "crypto_ram2" 00:28:05.193 }, 00:28:05.193 { 00:28:05.193 "nbd_device": "/dev/nbd3", 00:28:05.193 "bdev_name": "crypto_ram3" 00:28:05.193 } 00:28:05.193 ]' 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:05.193 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:05.452 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:05.452 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:05.452 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:05.452 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:05.452 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:05.452 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:05.452 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:05.452 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:05.452 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:05.452 18:31:13 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:05.710 18:31:14 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:05.710 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:28:05.968 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:28:05.968 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:28:05.968 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:28:05.968 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:05.968 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:05.968 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:28:05.968 18:31:14 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:05.968 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:05.968 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:05.968 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:05.968 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- 
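After all four `nbd_stop_disk` calls, the trace shows `nbd_get_disks` returning `[]` and the count check at `nbd_common.sh@64-66` landing on `count=0`. A sketch of that counting idiom, assuming `jq` is available (the `|| true` guard mirrors the `# true` step in the trace, since `grep -c` exits non-zero when it matches nothing):

```shell
# Counting idiom from nbd_common.sh@64-66 as seen in the trace: once every
# device is detached, the jq extraction yields an empty string, and grep -c
# still prints "0" on stdout even though it exits 1, so the guard keeps the
# assignment from tripping errexit.
nbd_disks_name=$(echo '[]' | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```

The subsequent `'[' 0 -ne 0 ']'` check in the trace then passes, confirming every nbd device was torn down before the data-verify stage restarts them.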
# local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:28:06.227 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:28:06.486 /dev/nbd0 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:06.486 1+0 records in 00:28:06.486 1+0 records out 00:28:06.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276919 s, 14.8 MB/s 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:28:06.486 18:31:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:28:06.486 /dev/nbd1 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:06.486 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:06.486 1+0 records in 00:28:06.486 1+0 records out 00:28:06.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178905 s, 22.9 MB/s 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:28:06.746 /dev/nbd10 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:06.746 1+0 records in 00:28:06.746 1+0 records out 00:28:06.746 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000287636 s, 14.2 MB/s 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:28:06.746 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:28:07.005 /dev/nbd11 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:07.005 1+0 records in 00:28:07.005 1+0 records out 00:28:07.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285319 s, 14.4 MB/s 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:07.005 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:07.006 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:07.265 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:07.265 { 00:28:07.265 "nbd_device": "/dev/nbd0", 00:28:07.265 "bdev_name": "crypto_ram" 00:28:07.265 }, 00:28:07.265 { 00:28:07.265 "nbd_device": "/dev/nbd1", 
00:28:07.265 "bdev_name": "crypto_ram1" 00:28:07.265 }, 00:28:07.265 { 00:28:07.265 "nbd_device": "/dev/nbd10", 00:28:07.265 "bdev_name": "crypto_ram2" 00:28:07.265 }, 00:28:07.265 { 00:28:07.265 "nbd_device": "/dev/nbd11", 00:28:07.265 "bdev_name": "crypto_ram3" 00:28:07.265 } 00:28:07.265 ]' 00:28:07.265 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:07.265 { 00:28:07.265 "nbd_device": "/dev/nbd0", 00:28:07.265 "bdev_name": "crypto_ram" 00:28:07.265 }, 00:28:07.265 { 00:28:07.265 "nbd_device": "/dev/nbd1", 00:28:07.265 "bdev_name": "crypto_ram1" 00:28:07.265 }, 00:28:07.265 { 00:28:07.265 "nbd_device": "/dev/nbd10", 00:28:07.265 "bdev_name": "crypto_ram2" 00:28:07.265 }, 00:28:07.265 { 00:28:07.265 "nbd_device": "/dev/nbd11", 00:28:07.265 "bdev_name": "crypto_ram3" 00:28:07.265 } 00:28:07.265 ]' 00:28:07.265 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:07.265 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:07.265 /dev/nbd1 00:28:07.265 /dev/nbd10 00:28:07.265 /dev/nbd11' 00:28:07.265 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:07.265 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:07.265 /dev/nbd1 00:28:07.265 /dev/nbd10 00:28:07.265 /dev/nbd11' 00:28:07.265 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:07.266 256+0 records in 00:28:07.266 256+0 records out 00:28:07.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113949 s, 92.0 MB/s 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:07.266 256+0 records in 00:28:07.266 256+0 records out 00:28:07.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0544733 s, 19.2 MB/s 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:07.266 256+0 records in 00:28:07.266 256+0 records out 00:28:07.266 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0439853 s, 23.8 MB/s 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:07.266 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:28:07.524 256+0 records in 00:28:07.524 256+0 records out 00:28:07.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0385699 s, 27.2 MB/s 00:28:07.524 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:07.524 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:28:07.524 256+0 records in 00:28:07.524 256+0 records out 00:28:07.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0368831 s, 28.4 MB/s 00:28:07.524 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:28:07.524 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:07.524 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:07.525 18:31:15 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:07.525 18:31:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:07.784 
18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:07.784 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:28:08.043 18:31:16 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:08.043 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:28:08.302 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:28:08.302 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:28:08.302 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:28:08.302 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:08.302 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.302 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:28:08.302 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:08.302 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:08.302 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:08.302 18:31:16 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:08.302 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:08.561 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:28:08.562 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:28:08.562 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:08.562 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:28:08.562 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:08.562 18:31:16 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:08.562 18:31:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:08.562 malloc_lvol_verify 00:28:08.821 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:08.821 6c3a34d3-af87-44a4-a265-59852f22407e 00:28:08.821 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:09.080 2cf6483c-6614-46d4-8aee-a46af90a800f 00:28:09.080 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:09.339 /dev/nbd0 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:09.339 mke2fs 1.46.5 (30-Dec-2021) 00:28:09.339 Discarding device blocks: 0/4096 done 00:28:09.339 Creating filesystem with 4096 1k blocks and 1024 inodes 00:28:09.339 00:28:09.339 Allocating group tables: 0/1 done 00:28:09.339 Writing inode tables: 0/1 done 00:28:09.339 Creating journal (1024 blocks): done 00:28:09.339 Writing superblocks and filesystem accounting information: 0/1 done 00:28:09.339 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 2361673 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 2361673 ']' 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 2361673 00:28:09.339 18:31:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:28:09.598 18:31:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:28:09.598 18:31:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2361673 00:28:09.598 18:31:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:09.598 18:31:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:09.598 18:31:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2361673' 00:28:09.598 killing process with pid 2361673 00:28:09.598 18:31:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@969 -- # kill 2361673 00:28:09.598 18:31:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@974 -- # wait 2361673 00:28:09.859 18:31:18 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:28:09.859 00:28:09.859 real 0m8.420s 00:28:09.859 user 0m10.542s 00:28:09.859 sys 0m3.208s 00:28:09.859 18:31:18 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:09.859 18:31:18 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:09.859 ************************************ 00:28:09.859 END TEST bdev_nbd 00:28:09.859 ************************************ 00:28:10.154 18:31:18 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:28:10.154 18:31:18 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = nvme ']' 00:28:10.154 18:31:18 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = gpt ']' 00:28:10.154 18:31:18 blockdev_crypto_qat -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:28:10.154 18:31:18 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:10.154 18:31:18 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:10.154 18:31:18 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:10.154 ************************************ 00:28:10.154 START TEST 
bdev_fio 00:28:10.154 ************************************ 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:28:10.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:28:10.154 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram1]' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram1 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- 
bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:10.155 ************************************ 00:28:10.155 START TEST bdev_fio_rw_verify 00:28:10.155 ************************************ 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:10.155 
18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:10.155 18:31:18 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:10.413 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:10.413 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:10.413 job_crypto_ram2: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:10.413 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:10.413 fio-3.35 00:28:10.413 Starting 4 threads 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:01.0 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:01.1 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:01.2 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:01.3 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:01.4 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:01.5 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:01.6 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:01.7 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:02.0 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:02.1 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:02.2 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:02.3 cannot be used 00:28:10.671 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:02.4 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:02.5 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:02.6 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b3:02.7 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:01.0 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:01.1 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:01.2 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:01.3 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:01.4 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:01.5 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:01.6 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:01.7 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:02.0 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:02.1 cannot be used 00:28:10.671 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:02.2 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:02.3 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:02.4 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:02.5 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:02.6 cannot be used 00:28:10.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:10.671 EAL: Requested device 0000:b5:02.7 cannot be used 00:28:25.550 00:28:25.550 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2363877: Wed Jul 24 18:31:31 2024 00:28:25.550 read: IOPS=30.4k, BW=119MiB/s (125MB/s)(1188MiB/10001msec) 00:28:25.550 slat (usec): min=11, max=422, avg=45.83, stdev=33.12 00:28:25.550 clat (usec): min=14, max=3193, avg=255.32, stdev=184.16 00:28:25.550 lat (usec): min=37, max=3451, avg=301.14, stdev=203.79 00:28:25.550 clat percentiles (usec): 00:28:25.550 | 50.000th=[ 196], 99.000th=[ 922], 99.900th=[ 1106], 99.990th=[ 1287], 00:28:25.550 | 99.999th=[ 2540] 00:28:25.550 write: IOPS=33.2k, BW=130MiB/s (136MB/s)(1269MiB/9783msec); 0 zone resets 00:28:25.550 slat (usec): min=17, max=1021, avg=54.38, stdev=33.00 00:28:25.550 clat (usec): min=13, max=2812, avg=285.00, stdev=189.86 00:28:25.550 lat (usec): min=44, max=3190, avg=339.39, stdev=209.02 00:28:25.550 clat percentiles (usec): 00:28:25.550 | 50.000th=[ 235], 99.000th=[ 963], 99.900th=[ 1172], 99.990th=[ 1582], 00:28:25.550 | 99.999th=[ 2311] 00:28:25.550 bw ( KiB/s): min=109288, max=173296, per=98.18%, avg=130446.89, stdev=4224.11, samples=76 00:28:25.550 iops : min=27322, max=43324, avg=32611.68, stdev=1056.01, 
samples=76 00:28:25.550 lat (usec) : 20=0.01%, 50=0.06%, 100=10.97%, 250=48.05%, 500=30.24% 00:28:25.550 lat (usec) : 750=7.14%, 1000=2.91% 00:28:25.550 lat (msec) : 2=0.61%, 4=0.01% 00:28:25.550 cpu : usr=99.71%, sys=0.00%, ctx=63, majf=0, minf=216 00:28:25.550 IO depths : 1=1.7%, 2=28.1%, 4=56.2%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:25.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.550 complete : 0=0.0%, 4=87.7%, 8=12.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:25.550 issued rwts: total=304163,324961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:25.550 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:25.550 00:28:25.550 Run status group 0 (all jobs): 00:28:25.550 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=1188MiB (1246MB), run=10001-10001msec 00:28:25.550 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=1269MiB (1331MB), run=9783-9783msec 00:28:25.550 00:28:25.550 real 0m13.324s 00:28:25.550 user 0m50.417s 00:28:25.550 sys 0m0.445s 00:28:25.550 18:31:31 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:25.550 18:31:31 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:28:25.550 ************************************ 00:28:25.550 END TEST bdev_fio_rw_verify 00:28:25.550 ************************************ 00:28:25.550 18:31:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:28:25.550 18:31:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:25.550 18:31:31 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:28:25.550 18:31:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local 
config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:25.550 18:31:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:28:25.550 18:31:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:28:25.550 18:31:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:28:25.551 18:31:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:28:25.551 18:31:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:28:25.551 18:31:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:28:25.551 18:31:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:28:25.551 18:31:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:25.551 18:31:31 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "5fc82f66-e263-5cba-aac0-4dd2f58ff631"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5fc82f66-e263-5cba-aac0-4dd2f58ff631",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "5fd2dd29-6ab1-5040-b85a-fd442c1ca3d7"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5fd2dd29-6ab1-5040-b85a-fd442c1ca3d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": 
"crypto_ram2",' ' "aliases": [' ' "c264b3f1-179b-5fd2-89f5-b9ba0874ee86"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "c264b3f1-179b-5fd2-89f5-b9ba0874ee86",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "77d22ca1-c572-52fc-a110-f8d0c39986fc"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "77d22ca1-c572-52fc-a110-f8d0c39986fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:28:25.551 crypto_ram1 00:28:25.551 crypto_ram2 00:28:25.551 crypto_ram3 ]] 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "5fc82f66-e263-5cba-aac0-4dd2f58ff631"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5fc82f66-e263-5cba-aac0-4dd2f58ff631",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "5fd2dd29-6ab1-5040-b85a-fd442c1ca3d7"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": 
"5fd2dd29-6ab1-5040-b85a-fd442c1ca3d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "c264b3f1-179b-5fd2-89f5-b9ba0874ee86"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "c264b3f1-179b-5fd2-89f5-b9ba0874ee86",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' 
"name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "77d22ca1-c572-52fc-a110-f8d0c39986fc"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "77d22ca1-c572-52fc-a110-f8d0c39986fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram1]' 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram1 00:28:25.551 
18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:25.551 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:25.552 ************************************ 00:28:25.552 START TEST bdev_fio_trim 00:28:25.552 ************************************ 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 
--verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:25.552 18:31:32 
blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:25.552 18:31:32 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:25.552 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:25.552 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:25.552 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:25.552 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:25.552 fio-3.35 00:28:25.552 Starting 4 threads 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:01.0 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:01.1 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:01.2 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:01.3 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:01.4 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:01.5 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:01.6 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:01.7 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:02.0 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:02.1 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:02.2 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:02.3 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:28:25.552 EAL: Requested device 0000:b3:02.4 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:02.5 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:02.6 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b3:02.7 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:01.0 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:01.1 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:01.2 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:01.3 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:01.4 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:01.5 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:01.6 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:01.7 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:02.0 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:02.1 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: 
Requested device 0000:b5:02.2 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:02.3 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:02.4 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:02.5 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:02.6 cannot be used 00:28:25.552 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:25.552 EAL: Requested device 0000:b5:02.7 cannot be used 00:28:37.765 00:28:37.765 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=2366171: Wed Jul 24 18:31:45 2024 00:28:37.765 write: IOPS=51.8k, BW=202MiB/s (212MB/s)(2022MiB/10001msec); 0 zone resets 00:28:37.765 slat (usec): min=12, max=379, avg=46.32, stdev=30.59 00:28:37.765 clat (usec): min=14, max=1269, avg=161.10, stdev=99.69 00:28:37.765 lat (usec): min=35, max=1392, avg=207.43, stdev=116.31 00:28:37.765 clat percentiles (usec): 00:28:37.765 | 50.000th=[ 141], 99.000th=[ 515], 99.900th=[ 627], 99.990th=[ 725], 00:28:37.765 | 99.999th=[ 1123] 00:28:37.765 bw ( KiB/s): min=193568, max=282016, per=100.00%, avg=207787.79, stdev=6751.31, samples=76 00:28:37.765 iops : min=48392, max=70504, avg=51946.95, stdev=1687.83, samples=76 00:28:37.765 trim: IOPS=51.8k, BW=202MiB/s (212MB/s)(2022MiB/10001msec); 0 zone resets 00:28:37.765 slat (usec): min=3, max=122, avg=12.03, stdev= 5.44 00:28:37.765 clat (usec): min=24, max=1393, avg=207.55, stdev=116.31 00:28:37.765 lat (usec): min=36, max=1436, avg=219.58, stdev=118.50 00:28:37.766 clat percentiles (usec): 00:28:37.766 | 50.000th=[ 180], 99.000th=[ 619], 99.900th=[ 750], 99.990th=[ 865], 00:28:37.766 | 99.999th=[ 1287] 00:28:37.766 bw ( KiB/s): min=193568, max=282016, per=100.00%, 
avg=207787.79, stdev=6751.31, samples=76
00:28:37.766 iops : min=48392, max=70504, avg=51946.95, stdev=1687.83, samples=76
00:28:37.766 lat (usec) : 20=0.01%, 50=2.85%, 100=18.12%, 250=58.81%, 500=17.95%
00:28:37.766 lat (usec) : 750=2.22%, 1000=0.05%
00:28:37.766 lat (msec) : 2=0.01%
00:28:37.766 cpu : usr=99.70%, sys=0.00%, ctx=60, majf=0, minf=90
00:28:37.766 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:28:37.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:37.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:37.766 issued rwts: total=0,517701,517702,0 short=0,0,0,0 dropped=0,0,0,0
00:28:37.766 latency : target=0, window=0, percentile=100.00%, depth=8
00:28:37.766
00:28:37.766 Run status group 0 (all jobs):
00:28:37.766 WRITE: bw=202MiB/s (212MB/s), 202MiB/s-202MiB/s (212MB/s-212MB/s), io=2022MiB (2121MB), run=10001-10001msec
00:28:37.766 TRIM: bw=202MiB/s (212MB/s), 202MiB/s-202MiB/s (212MB/s-212MB/s), io=2022MiB (2121MB), run=10001-10001msec
00:28:37.766
00:28:37.766 real 0m13.337s
00:28:37.766 user 0m50.664s
00:28:37.766 sys 0m0.469s
00:28:37.766 18:31:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:37.766 18:31:45 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:28:37.766 ************************************
00:28:37.766 END TEST bdev_fio_trim
00:28:37.766 ************************************
00:28:37.766 18:31:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:28:37.766 18:31:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:28:37.766 18:31:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:28:37.766 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:28:37.766 18:31:45 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:28:37.766
00:28:37.766 real 0m26.989s
00:28:37.766 user 1m41.243s
00:28:37.766 sys 0m1.103s
00:28:37.766 18:31:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:37.766 18:31:45 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:28:37.766 ************************************
00:28:37.766 END TEST bdev_fio
00:28:37.766 ************************************
00:28:37.766 18:31:45 blockdev_crypto_qat -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:37.766 18:31:45 blockdev_crypto_qat -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:28:37.766 18:31:45 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:28:37.766 18:31:45 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:37.766 18:31:45 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:37.766 ************************************
00:28:37.766 START TEST bdev_verify
00:28:37.766 ************************************
00:28:37.766 18:31:45 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:28:37.766 [2024-07-24 18:31:45.624699] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:28:37.766 [2024-07-24 18:31:45.624739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368037 ]
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:01.0 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:01.1 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:01.2 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:01.3 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:01.4 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:01.5 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:01.6 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:01.7 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:02.0 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:02.1 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:02.2 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:02.3 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:02.4 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:02.5 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:02.6 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b3:02.7 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:01.0 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:01.1 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:01.2 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:01.3 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:01.4 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:01.5 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:01.6 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:01.7 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:02.0 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:02.1 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:02.2 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:02.3 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:02.4 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:02.5 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:02.6 cannot be used
00:28:37.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:37.766 EAL: Requested device 0000:b5:02.7 cannot be used
00:28:37.766 [2024-07-24 18:31:45.713281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:37.766 [2024-07-24 18:31:45.783815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:37.766 [2024-07-24 18:31:45.783818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:37.766 [2024-07-24 18:31:45.804793] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:28:37.766 [2024-07-24 18:31:45.812821] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:28:37.766 [2024-07-24 18:31:45.820843] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:28:37.766 [2024-07-24 18:31:45.916631] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:28:39.670 [2024-07-24 18:31:48.077004] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:28:39.670 [2024-07-24 18:31:48.077074] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:28:39.670 [2024-07-24 18:31:48.077086] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:39.670 [2024-07-24 18:31:48.085023] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:28:39.670 [2024-07-24 18:31:48.085038] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:28:39.670 [2024-07-24 18:31:48.085046] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:39.670 [2024-07-24 18:31:48.093044] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:28:39.671 [2024-07-24 18:31:48.093057] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:28:39.671 [2024-07-24 18:31:48.093065] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:39.671 [2024-07-24 18:31:48.101065] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:28:39.671 [2024-07-24 18:31:48.101078] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:28:39.671 [2024-07-24 18:31:48.101085] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:39.671 Running I/O for 5 seconds...
00:28:44.945
00:28:44.945 Latency(us)
00:28:44.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:44.945 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:44.945 Verification LBA range: start 0x0 length 0x1000
00:28:44.946 crypto_ram : 5.04 736.34 2.88 0.00 0.00 173575.01 8074.04 110729.63
00:28:44.946 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:44.946 Verification LBA range: start 0x1000 length 0x1000
00:28:44.946 crypto_ram : 5.04 736.61 2.88 0.00 0.00 173278.67 8493.47 108213.04
00:28:44.946 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:44.946 Verification LBA range: start 0x0 length 0x1000
00:28:44.946 crypto_ram1 : 5.04 736.05 2.88 0.00 0.00 173232.52 9594.47 101082.73
00:28:44.946 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:44.946 Verification LBA range: start 0x1000 length 0x1000
00:28:44.946 crypto_ram1 : 5.04 739.51 2.89 0.00 0.00 172367.90 407.96 99405.00
00:28:44.946 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:44.946 Verification LBA range: start 0x0 length 0x1000
00:28:44.946 crypto_ram2 : 5.03 5797.20 22.65 0.00 0.00 21937.45 4561.31 17196.65
00:28:44.946 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:44.946 Verification LBA range: start 0x1000 length 0x1000
00:28:44.946 crypto_ram2 : 5.04 5846.48 22.84 0.00 0.00 21769.11 3407.87 17511.22
00:28:44.946 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:44.946 Verification LBA range: start 0x0 length 0x1000
00:28:44.946 crypto_ram3 : 5.04 5806.14 22.68 0.00 0.00 21871.62 547.23 16672.36
00:28:44.946 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:28:44.946 Verification LBA range: start 0x1000 length 0x1000
00:28:44.946 crypto_ram3 : 5.04 5845.16 22.83 0.00 0.00 21739.02 3067.08 17196.65
00:28:44.946 ===================================================================================================================
00:28:44.946 Total : 26243.49 102.51 0.00 0.00 38841.78 407.96 110729.63
00:28:45.207
00:28:45.207 real 0m7.975s
00:28:45.207 user 0m15.302s
00:28:45.207 sys 0m0.295s
00:28:45.207 18:31:53 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:45.207 18:31:53 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:28:45.207 ************************************
00:28:45.207 END TEST bdev_verify
00:28:45.207 ************************************
00:28:45.207 18:31:53 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:28:45.207 18:31:53 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:28:45.207 18:31:53 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:45.207 18:31:53 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:45.207 ************************************
00:28:45.207 START TEST bdev_verify_big_io
00:28:45.207 ************************************
00:28:45.207 18:31:53 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:28:45.207 [2024-07-24 18:31:53.679881] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization...
00:28:45.207 [2024-07-24 18:31:53.679921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369381 ]
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:01.0 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:01.1 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:01.2 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:01.3 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:01.4 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:01.5 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:01.6 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:01.7 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:02.0 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:02.1 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:02.2 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:02.3 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:02.4 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:02.5 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:02.6 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b3:02.7 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:01.0 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:01.1 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:01.2 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:01.3 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:01.4 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:01.5 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:01.6 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:01.7 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:02.0 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:02.1 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:02.2 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:02.3 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:02.4 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:02.5 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:02.6 cannot be used
00:28:45.207 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:45.207 EAL: Requested device 0000:b5:02.7 cannot be used
00:28:45.207 [2024-07-24 18:31:53.769534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:45.466 [2024-07-24 18:31:53.841269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:45.466 [2024-07-24 18:31:53.841271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:45.466 [2024-07-24 18:31:53.862359] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:28:45.466 [2024-07-24 18:31:53.870388] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:28:45.466 [2024-07-24 18:31:53.878410] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:28:45.466 [2024-07-24 18:31:53.980018] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:28:48.001 [2024-07-24 18:31:56.134497] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:28:48.001 [2024-07-24 18:31:56.134564] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:28:48.001 [2024-07-24 18:31:56.134575] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:48.001 [2024-07-24 18:31:56.142516] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:28:48.001 [2024-07-24 18:31:56.142534] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:28:48.001 [2024-07-24 18:31:56.142542] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:48.001 [2024-07-24 18:31:56.150538] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:28:48.001 [2024-07-24 18:31:56.150552] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:28:48.001 [2024-07-24 18:31:56.150559] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:48.001 [2024-07-24 18:31:56.158560] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:28:48.001 [2024-07-24 18:31:56.158573] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:28:48.001 [2024-07-24 18:31:56.158580] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:48.001 Running I/O for 5 seconds...
00:28:48.263 [2024-07-24 18:31:56.744442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:48.263 [2024-07-24 18:31:56.744722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:48.263 [2024-07-24 18:31:56.744781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:48.263 [2024-07-24 18:31:56.744830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:48.263 [2024-07-24 18:31:56.744869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.744897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.745152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.745165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.747750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.747783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.747822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.747849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.748243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.748275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.748317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.748356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.748615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.263 [2024-07-24 18:31:56.748631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.751262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.751293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.751321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.751349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.751619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.751652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.751683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.751724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.752104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.752117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.754585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.754631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.263 [2024-07-24 18:31:56.754669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.754696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.754980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.755010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.755037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.755064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.755372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.755386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.757793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.757833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.757860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.757887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.758248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.263 [2024-07-24 18:31:56.758278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.758306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.758333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.758644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.758658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.760997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.761033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.761060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.761088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.761419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.761450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.761483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.263 [2024-07-24 18:31:56.761511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.263 [2024-07-24 18:31:56.761766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.266 [... previous message repeated continuously through 2024-07-24 18:31:56.847865 ...] 
00:28:48.266 [2024-07-24 18:31:56.848593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.266 [2024-07-24 18:31:56.849394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.266 [2024-07-24 18:31:56.850345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.266 [2024-07-24 18:31:56.851268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.266 [2024-07-24 18:31:56.852212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.266 [2024-07-24 18:31:56.853053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.266 [2024-07-24 18:31:56.854004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.266 [2024-07-24 18:31:56.854182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.266 [2024-07-24 18:31:56.854195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.857673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.858678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.859079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.859946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.530 [2024-07-24 18:31:56.861092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.862112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.862371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.862624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.862940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.862953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.865497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.866268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.867162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.867956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.869096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.869636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.869898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.530 [2024-07-24 18:31:56.870149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.870480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.870494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.872811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.873271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.874074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.875011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.876166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.876420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.876676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.876929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.877250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.877263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.530 [2024-07-24 18:31:56.879103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.880186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.881157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.882175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.882762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.883016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.883269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.883525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.883863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.883877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.885729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.886528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.887484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.530 [2024-07-24 18:31:56.888436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.888949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.889201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.889451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.889708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.889889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.889902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.891914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.892872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.893833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.894560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.895117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.895380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.530 [2024-07-24 18:31:56.895635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.896279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.530 [2024-07-24 18:31:56.896489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.896501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.898481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.899439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.900448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.900711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.901294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.901546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.901802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.902733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.902915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.531 [2024-07-24 18:31:56.902926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.904970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.905827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.906414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.906675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.907243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.907497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.908238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.909036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.909214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.909225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.911454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.912465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.531 [2024-07-24 18:31:56.912722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.912979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.913510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.913967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.914746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.915701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.915881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.915892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.918013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.918378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.918637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.918893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.919471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.531 [2024-07-24 18:31:56.920531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.921489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.922530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.922716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.922727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.924615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.924876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.925133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.925395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.926474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.927270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.928209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.929162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.531 [2024-07-24 18:31:56.929407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.929419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.930744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.931001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.931254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.931511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.932559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.933514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.934465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.935232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.935431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.935443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.936876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.531 [2024-07-24 18:31:56.937119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.937360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.937694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.938761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.939720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.940740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.941300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.941514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.941525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.943036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.943292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.943546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.944533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.531 [2024-07-24 18:31:56.945720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.946677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.947104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.947951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.948126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.948137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.531 [2024-07-24 18:31:56.949783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.950040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.950674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.951473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.952615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.953412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.954264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.532 [2024-07-24 18:31:56.955059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.955241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.955251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.957237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.957492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.958398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.959381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.960513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.961010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.961803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.962757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.962938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.532 [2024-07-24 18:31:56.962949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.532 [2024-07-24 18:31:56.964801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
... [identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeat continuously between 18:31:56.964801 and 18:31:57.088129] ...
00:28:48.534 [2024-07-24 18:31:57.088129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.534 [2024-07-24 18:31:57.088142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.089766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.089799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.089826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.089854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.090128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.090166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.090194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.090222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.090250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.090567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.090580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.092264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.534 [2024-07-24 18:31:57.092296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.092335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.092362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.092692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.092736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.092767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.092794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.092821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.093133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.093145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.094859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.094889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.094915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.534 [2024-07-24 18:31:57.094942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.095262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.095300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.095330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.095358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.095386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.095658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.095671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.097337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.097368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.097398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.097425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.097727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.534 [2024-07-24 18:31:57.097774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.097829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.097868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.097895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.098140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.098152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.099874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.099905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.099938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.099966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.100223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.100272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.100301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.534 [2024-07-24 18:31:57.100351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.100386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.100654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.100667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.102693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.102738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.102788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.102817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.103071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.103119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.103148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.103176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.103205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.534 [2024-07-24 18:31:57.103499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.103512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.105220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.105252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.105281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.105317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.105567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.105613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.105649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.534 [2024-07-24 18:31:57.105677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.105704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.106027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.106041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.535 [2024-07-24 18:31:57.107731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.107773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.107804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.107834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.108130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.108168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.108198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.108226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.108254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.108568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.108581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.110295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.110329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.535 [2024-07-24 18:31:57.110357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.110383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.110698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.110741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.110770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.110797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.110824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.111102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.111115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.112600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.112635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.112661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.112688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.535 [2024-07-24 18:31:57.112931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.112978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.113018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.113047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.113079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.113402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.113415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.114935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.114969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.114995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.115021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.115192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.115235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.535 [2024-07-24 18:31:57.115263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.115290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.115316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.115487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.115498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.116623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.116658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.116688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.116722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.116908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.116943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.116978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.117007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.535 [2024-07-24 18:31:57.117035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.117331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.117344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.119040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.119071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.119097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.119123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.119325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.119369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.119401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.119427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.119458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.119636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.535 [2024-07-24 18:31:57.119648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.120796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.120828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.120855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.120882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.121056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.121100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.121129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.121157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.121185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.121441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.535 [2024-07-24 18:31:57.121454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.123667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.800 [2024-07-24 18:31:57.123701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.123729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.123757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.124010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.124053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.124081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.124109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.124138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.124326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.124339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.125393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.125424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.800 [2024-07-24 18:31:57.125453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.800 [2024-07-24 18:31:57.125479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.803 [last message repeated through 2024-07-24 18:31:57.282831]
00:28:48.803 [2024-07-24 18:31:57.283785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.284663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.285526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.285736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.286738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.287700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.288198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.288457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.288785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.288799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.291258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.292292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.292869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.803 [2024-07-24 18:31:57.293663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.293840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.294805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.295623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.295878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.296130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.296453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.296465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.298655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.299125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.300040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.301040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.301216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.803 [2024-07-24 18:31:57.302192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.302451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.302706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.302958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.303265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.303278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.305173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.306058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.306849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.307805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.307981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.308517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.308783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.803 [2024-07-24 18:31:57.309022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.309264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.309549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.309561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.311018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.311828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.312777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.313735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.313953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.314219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.314474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.314730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.314985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.803 [2024-07-24 18:31:57.315182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.315195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.317286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.318239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.318660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.318918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.319241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.319502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.319758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.320020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.320280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.320605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.320624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.803 [2024-07-24 18:31:57.322473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.322736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.322988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.323242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.323491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.323763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.324021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.803 [2024-07-24 18:31:57.324276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.324530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.324857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.324871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.326745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.327002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.804 [2024-07-24 18:31:57.327261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.327520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.327886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.328149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.328402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.328662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.328939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.329208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.329221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.331266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.331528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.331788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.332042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.804 [2024-07-24 18:31:57.332357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.332621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.332882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.333139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.333390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.333682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.333695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.335536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.335795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.336049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.336312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.336578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.336849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.804 [2024-07-24 18:31:57.337103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.337364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.337618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.337922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.337935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.339832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.340093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.340138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.340398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.340730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.340993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.341246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.341499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.804 [2024-07-24 18:31:57.341759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.342037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.342050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.343958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.344204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.344445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.344478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.344828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.345092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.345347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.345605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.345867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.346192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.804 [2024-07-24 18:31:57.346205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.347972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.348005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.348033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.348059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.348377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.348416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.348446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.348473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.348501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.348780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.348793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.350426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.804 [2024-07-24 18:31:57.350458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.350485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.804 [2024-07-24 18:31:57.350513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.350847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.350884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.350923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.350951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.350979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.351313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.351326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.352943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.352975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.353005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.805 [2024-07-24 18:31:57.353033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.353307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.353342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.353370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.353396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.353424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.353739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.353753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.355331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.355362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.355390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.355423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:48.805 [2024-07-24 18:31:57.355763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:48.805 [2024-07-24 18:31:57.355801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:49.130 [... identical "Failed to get src_mbufs!" errors repeated through 2024-07-24 18:31:57.411345; duplicates omitted ...]
00:28:49.130 [2024-07-24 18:31:57.411345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:49.130 [2024-07-24 18:31:57.411372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.411687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.411731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.411762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.411788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.411815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.412015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.412028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.413109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.413139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.413165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.413191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.413410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.130 [2024-07-24 18:31:57.413452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.413481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.413507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.413534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.413707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.413719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.415378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.415417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.415673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.415705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.415968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.416010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.416037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.130 [2024-07-24 18:31:57.416063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.416090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.416294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.416306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.417380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.417411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.417437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.418329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.418499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.418539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.418566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.418593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.418630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.130 [2024-07-24 18:31:57.418819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.418831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.421049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.421856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.422795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.423737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.423958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.424790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.425586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.426536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.427488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.427811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.427824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.130 [2024-07-24 18:31:57.430761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.431821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.432816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.433750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.130 [2024-07-24 18:31:57.433991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.434791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.435744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.436697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.437391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.437665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.437678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.440046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.441001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.131 [2024-07-24 18:31:57.441953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.442379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.442557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.443501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.444505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.445577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.445839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.446160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.446173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.448531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.449473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.450219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.451130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.131 [2024-07-24 18:31:57.451337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.452303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.453253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.453741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.454000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.454334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.454347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.456858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.457860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.458397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.459196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.459376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.460344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.131 [2024-07-24 18:31:57.461220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.461470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.461724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.462022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.462034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.464191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.464746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.465748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.466784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.466965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.467935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.468218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.468471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.131 [2024-07-24 18:31:57.468727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.469053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.469067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.471163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.471914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.472707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.473662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.473844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.474505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.474761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.475014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.475268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.475596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.131 [2024-07-24 18:31:57.475609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.477024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.477832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.478776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.479730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.479908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.480174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.480430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.480688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.480946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.481158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.481170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.483178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.131 [2024-07-24 18:31:57.484017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.484904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.485873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.486226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.486490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.486766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.487031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.487717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.487952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.487964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.489810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.490776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.491731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.131 [2024-07-24 18:31:57.492211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.492534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.492805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.493061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.493317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.131 [2024-07-24 18:31:57.494310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.494490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.494501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.496653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.497642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.498466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.498729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.499063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.132 [2024-07-24 18:31:57.499323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.499575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.500525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.501393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.501572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.501583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.503675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.504661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.504923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.505179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.505445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.505713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.132 [2024-07-24 18:31:57.506197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.132 [2024-07-24 18:31:57.506998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:49.136 [2024-07-24 18:31:57.640124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.640165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.640193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.640233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.640501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.640518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.642274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.642318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.642368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.642406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.642682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.642732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.642771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.136 [2024-07-24 18:31:57.642798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.642826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.643068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.643081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.644786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.644818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.644845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.644872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.645096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.645141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.645170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.645197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.136 [2024-07-24 18:31:57.645237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.137 [2024-07-24 18:31:57.645576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.645591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.647202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.647251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.647282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.647324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.647596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.647648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.647679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.647706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.647736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.648045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.648060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.137 [2024-07-24 18:31:57.649541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.649592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.649620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.649651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.649873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.649921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.649950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.649978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.650005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.650297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.650311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.652135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.652168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.137 [2024-07-24 18:31:57.652207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.652236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.652541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.652587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.652632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.652675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.652706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.652999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.653012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.654757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.654788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.654815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.654842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.137 [2024-07-24 18:31:57.655070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.655114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.655143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.655171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.655198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.655377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.655388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.656519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.656551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.656590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.656621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.656805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.656846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.137 [2024-07-24 18:31:57.656878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.656905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.656933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.657109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.657122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.658965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.658998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.659028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.659059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.659238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.659284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.659314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.659354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.137 [2024-07-24 18:31:57.659383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.659566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.659577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.660724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.660757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.660785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.660815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.660996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.661039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.661068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.661095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.661123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.661297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.137 [2024-07-24 18:31:57.661310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.137 [2024-07-24 18:31:57.663758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.664860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.138 [2024-07-24 18:31:57.664892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.664923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.664951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.665169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.665214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.665243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.665270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.665300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.665481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.665494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.667062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.667097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.667125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.138 [2024-07-24 18:31:57.667152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.667462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.667504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.667533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.667561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.667590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.667772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.667785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.668902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.668943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.668971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.668998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.669181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.138 [2024-07-24 18:31:57.669224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.669257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.669284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.669312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.669494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.669507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.670920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.670955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.670983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.671022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.671349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.671396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.671427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.138 [2024-07-24 18:31:57.671454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.671483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.671801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.671814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.672836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.672868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.672906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.672936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.673177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.673218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.673246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.673274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.138 [2024-07-24 18:31:57.673301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.138 [2024-07-24 18:31:57.673515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.405 [above *ERROR* line repeated for every subsequent allocation attempt through 2024-07-24 18:31:57.801762]
00:28:49.405 [2024-07-24 18:31:57.802018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.405 [2024-07-24 18:31:57.802272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.405 [2024-07-24 18:31:57.802541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.405 [2024-07-24 18:31:57.802555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.405 [2024-07-24 18:31:57.804784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.805426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.806475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.807430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.807610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.808583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.808924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.809177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.809432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.406 [2024-07-24 18:31:57.809788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.809804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.811937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.812680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.813470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.814429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.814607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.815278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.815537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.815798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.816056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.816368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.816382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.406 [2024-07-24 18:31:57.817870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.818683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.819639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.820591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.820792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.821059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.821312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.821567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.821831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.822011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.822023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.824178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.825172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.406 [2024-07-24 18:31:57.826245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.827262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.827524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.827794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.828049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.828303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.828903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.829109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.829121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.831024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.831990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.832949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.833340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.406 [2024-07-24 18:31:57.833708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.833969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.834226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.834492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.835387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.835564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.835577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.837678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.838632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.839422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.839682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.840004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.840267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.406 [2024-07-24 18:31:57.840522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.841102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.842060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.842237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.842249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.843873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.844138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.844395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.844655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.844967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.846034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.847030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.848100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.406 [2024-07-24 18:31:57.849105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.849389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.849401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.850694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.850952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.851211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.851466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.851741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.852008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.852266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.852521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.852782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.853104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.406 [2024-07-24 18:31:57.853117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.854966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.855224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.855480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.855745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.406 [2024-07-24 18:31:57.856017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.856282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.856538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.856801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.857056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.857308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.857321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.859326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.407 [2024-07-24 18:31:57.859588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.859853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.860107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.860429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.860700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.860957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.861221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.861479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.861795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.861812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.863771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.864031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.864290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.407 [2024-07-24 18:31:57.864546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.864812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.865078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.865337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.865593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.865868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.866183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.866197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.868115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.868375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.868651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.868910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.869250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.407 [2024-07-24 18:31:57.869515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.869778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.870034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.870293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.870560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.870573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.872743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.873006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.873266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.873520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.873839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.874104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.874360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.407 [2024-07-24 18:31:57.874617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.874895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.875203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.875216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.877203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.877462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.877735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.877991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.878287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.878551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.878814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.879068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.879326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.407 [2024-07-24 18:31:57.879641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.879654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.881288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.882237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.882498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.882758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.883049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.883315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.884281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.884540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.884802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.884980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.884993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.407 [2024-07-24 18:31:57.887685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.887948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.888204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.889218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.889555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.889827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.890087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.890345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.891313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.891680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.891694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.893611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.407 [2024-07-24 18:31:57.893878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.411 [2024-07-24 18:31:57.961938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.961950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.965237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.965274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.965302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.965330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.965619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.965672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.965706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.965734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.965764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.966077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.966090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.411 [2024-07-24 18:31:57.968589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.968635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.968664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.968692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.968892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.968932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.968960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.968987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.969016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.969219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.969231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.971991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.972028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.411 [2024-07-24 18:31:57.972060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.972088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.972313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.972352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.972380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.972407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.972435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.972658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.972670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.975700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.975738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.975766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.975796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.411 [2024-07-24 18:31:57.976002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.976048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.976078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.976106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.976146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.976487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.976500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.979344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.979388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.979415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.979442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.979619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.979665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.411 [2024-07-24 18:31:57.979694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.979723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.979751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.979933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.979945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.982564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.982600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.982631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.982658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.982974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.983011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.983040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.983069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.411 [2024-07-24 18:31:57.983096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.983281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.983292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.985686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.985743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.985996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.986030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.986057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.411 [2024-07-24 18:31:57.986233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.003095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.003147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.003392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.003431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.676 [2024-07-24 18:31:58.003674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.003713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.004283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.004498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.004510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.004521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.009634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.009903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.010842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.011024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.011036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.013181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.014140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.676 [2024-07-24 18:31:58.014869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.015130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.015712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.015975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.016981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.017904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.018086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.018099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.020217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.021091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.021353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.021612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.022185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.676 [2024-07-24 18:31:58.023081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.023890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.024861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.025043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.025056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.027307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.027573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.027838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.028097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.029128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.029940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.030907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.031871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.676 [2024-07-24 18:31:58.032133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.032146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.033463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.033735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.033997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.034256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.035264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.036227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.037186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.037791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.037975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.037988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.039390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.676 [2024-07-24 18:31:58.039656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.039908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.040590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.041826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.042786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.043509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.044456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.044673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.044687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.046296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.046556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.047139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.047950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.676 [2024-07-24 18:31:58.049089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.049920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.050772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.051567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.051752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.051765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.053500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.053940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.054746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.055700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.056895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.057575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.058358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.676 [2024-07-24 18:31:58.059335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.059517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.059530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.061491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.062350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.676 [2024-07-24 18:31:58.063317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.677 [2024-07-24 18:31:58.064282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.677 [2024-07-24 18:31:58.064998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.677 [2024-07-24 18:31:58.065797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.677 [2024-07-24 18:31:58.066755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.677 [2024-07-24 18:31:58.067722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.677 [2024-07-24 18:31:58.067955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.677 [2024-07-24 18:31:58.067969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.677 [2024-07-24 18:31:58.070991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.677 [... identical *ERROR* line repeated continuously; duplicates elided ...] 
00:28:49.680 [2024-07-24 18:31:58.203958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.680 [2024-07-24 18:31:58.203984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.204184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.204214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.204240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.204267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.204441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.204453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.206206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.206238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.206265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.206291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.206504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.206532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.680 [2024-07-24 18:31:58.206559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.206594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.206776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.206787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.207894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.207937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.207966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.207995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.208200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.208229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.208256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.208286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.208459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.680 [2024-07-24 18:31:58.208471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.210093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.210124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.210151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.210178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.210513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.210542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.210569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.210595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.210816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.210829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.211907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.211938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.680 [2024-07-24 18:31:58.211965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.211991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.212257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.212287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.212313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.212340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.212514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.212526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.214099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.214129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.214156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.214182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.214530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.680 [2024-07-24 18:31:58.214563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.214591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.214620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.214806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.214818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.215944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.215978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.216005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.216034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.216289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.216321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.216348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.216374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.680 [2024-07-24 18:31:58.216545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.216558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.217981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.218012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.680 [2024-07-24 18:31:58.218051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.218078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.218452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.218482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.218510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.218537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.218809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.218822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.219819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.681 [2024-07-24 18:31:58.219850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.219878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.219905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.220109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.220139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.220166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.220198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.220379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.220391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.221693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.221725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.221751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.221778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.681 [2024-07-24 18:31:58.222086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.222115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.222142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.222170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.222491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.222505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.223537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.223568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.223595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.223628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.223924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.223954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.223980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.681 [2024-07-24 18:31:58.224007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.224212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.224225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.225379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.225409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.225437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.225463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.225802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.225839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.225879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.225906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.226245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.226262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.681 [2024-07-24 18:31:58.227410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.227449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.227475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.227501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.227704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.227733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.227761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.227788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.227962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.227973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.229105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.229138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.229165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.681 [2024-07-24 18:31:58.229192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.229533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.229565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.229593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.229620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.229927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.229940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.231272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.231302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.231328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.231357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.231560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.231589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.681 [2024-07-24 18:31:58.231615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.231645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.231972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.231987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.232986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.233018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.233046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.233073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.233422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.233452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.233480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.233507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.233833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.681 [2024-07-24 18:31:58.233847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.235453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.235495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.235522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.235548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.681 [2024-07-24 18:31:58.235753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.682 [2024-07-24 18:31:58.235782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.682 [2024-07-24 18:31:58.235809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.682 [2024-07-24 18:31:58.235836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.682 [2024-07-24 18:31:58.236046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.682 [2024-07-24 18:31:58.236058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.682 [2024-07-24 18:31:58.237066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.682 [2024-07-24 18:31:58.237096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.682 [2024-07-24 18:31:58.237124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:49.948 [2024-07-24 18:31:58.398117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.948 [2024-07-24 18:31:58.398475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.948 [2024-07-24 18:31:58.398489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.948 [2024-07-24 18:31:58.400394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.948 [2024-07-24 18:31:58.400657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.948 [2024-07-24 18:31:58.400912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.948 [2024-07-24 18:31:58.401166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.948 [2024-07-24 18:31:58.401711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.401970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.402221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.402478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.402808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.402822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.949 [2024-07-24 18:31:58.404613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.404870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.405126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.405385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.405972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.406226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.406479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.406741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.406991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.407003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.409256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.409517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.409776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.949 [2024-07-24 18:31:58.410030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.410613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.411645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.411905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.412829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.413110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.413123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.414982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.415241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.415494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.415526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.416137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.416392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.949 [2024-07-24 18:31:58.416655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.416924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.417253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.417266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.420047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.420302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.420638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.420673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.421110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.421558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.421591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.422381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.422561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.949 [2024-07-24 18:31:58.422574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.424572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.425528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.425561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.426002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.426628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.426663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.426914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.427370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.427571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.427584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.429432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.429466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.949 [2024-07-24 18:31:58.430425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.431382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.431634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.432532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.432563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.432822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.433154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.433168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.435366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.435401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.436426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.437460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.438508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.949 [2024-07-24 18:31:58.438543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.439495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.439527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.439707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.439719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.441494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.441529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.442602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.442638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.443762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.443796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.444794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.444834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.949 [2024-07-24 18:31:58.445128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.445141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.446684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.446719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.446745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.446768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.447309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.447343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.448263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.949 [2024-07-24 18:31:58.448293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.448603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.448616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.449784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.950 [2024-07-24 18:31:58.449813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.449839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.449874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.450470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.450501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.451291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.451320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.451494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.451504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.452886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.452920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.452949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.452978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.950 [2024-07-24 18:31:58.453331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.453364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.453395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.453424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.453745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.453758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.454782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.454829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.454858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.454886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.455131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.455161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.455189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.950 [2024-07-24 18:31:58.455217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.455437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.455453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.456548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.456587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.456618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.456649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.457000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.457032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.457061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.457090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.457332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.457346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.950 [2024-07-24 18:31:58.458746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.458776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.458803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.458830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.459033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.459063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.459090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.459118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.459294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.459305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.460398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.460430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.460464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.950 [2024-07-24 18:31:58.460496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.460697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.460735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.460763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.460790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.461085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.461097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.462752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.462794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.462820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.462852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.463056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:49.950 [2024-07-24 18:31:58.463085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:49.950 [2024-07-24 18:31:58.463111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.218 [... same "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeated continuously from 18:31:58.463111 through 18:31:58.541122; repeats elided ...] 
00:28:50.218 [2024-07-24 18:31:58.542086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.543049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.543722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.543900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.543913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.546926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.547883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.548836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.549279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.549456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.550459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.551539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.552522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.218 [2024-07-24 18:31:58.552779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.553094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.553107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.555484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.556425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.557073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.558107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.558309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.559275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.560236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.560607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.561457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.561794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.218 [2024-07-24 18:31:58.561814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.564863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.565739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.566541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.567315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.567494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.568463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.569049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.569314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.569566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.569920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.569934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.572019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.218 [2024-07-24 18:31:58.572482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.573290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.574250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.574428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.575330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.576083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.576488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.576747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.576925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.576938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.579788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.580772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.581832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.218 [2024-07-24 18:31:58.582815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.582992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.583258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.583513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.583770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.584028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.584240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.584252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.586088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.586874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.587824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.588767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.589067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.218 [2024-07-24 18:31:58.590124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.590385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.590711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.591540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.591884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.591897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.595396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.596277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.596532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.596790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.597095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.597353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.598258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.218 [2024-07-24 18:31:58.599067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.600020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.600198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.600211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.602229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.602605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.603390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.603646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.603902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.604746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.605004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.605641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.218 [2024-07-24 18:31:58.606436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.219 [2024-07-24 18:31:58.606614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.606629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.610050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.610323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.610577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.610835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.611151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.612104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.613146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.614109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.615003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.615360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.615373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.219 [2024-07-24 18:31:58.616604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.617559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.617818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.618262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.618439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.618711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.618968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.619226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.619480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.619812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.619826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.621944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.622828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.219 [2024-07-24 18:31:58.623103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.623363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.623624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.623893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.624159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.624409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.624663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.624985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.624997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.626638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.627664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.627918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.628173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.219 [2024-07-24 18:31:58.628455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.628725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.628980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.629231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.629483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.629767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.629781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.631738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.631999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.632256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.632510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.632838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.633098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.219 [2024-07-24 18:31:58.633352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.633607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.633869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.634054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.634066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.635755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.636019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.636277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.636532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.636849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.637109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.637361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.637622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.219 [2024-07-24 18:31:58.637883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.638060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.638073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.640263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.640522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.640778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.641031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.641324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.641604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.641958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.642767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.643020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.219 [2024-07-24 18:31:58.643280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.219 [2024-07-24 18:31:58.643293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.223 [... previous message repeated many times, timestamps 18:31:58.643293 through 18:31:58.753806 ...] 
00:28:50.223 [2024-07-24 18:31:58.753834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.754132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.754146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.757031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.757072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.757099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.757129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.757302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.757342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.757379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.757406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.757431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.757607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.223 [2024-07-24 18:31:58.757619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.758973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.759006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.759033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.759060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.759236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.759275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.759302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.759329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.759364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.759704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.759718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.762042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.223 [2024-07-24 18:31:58.762076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.762103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.762129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.762354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.762396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.762424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.762450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.762477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.762653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.762664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.763832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.763865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.763897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.223 [2024-07-24 18:31:58.763925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.764242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.764292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.764324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.764356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.764383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.764560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.764572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.766874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.766916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.766944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.766970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.767143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.223 [2024-07-24 18:31:58.767182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.767209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.767242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.767270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.767443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.767454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.768555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.768587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.768614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.768644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.768947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.768982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.769010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.223 [2024-07-24 18:31:58.769038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.769065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.769269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.769281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.771874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.771909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.771936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.771963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.772215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.772261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.772291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.772319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.772346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.223 [2024-07-24 18:31:58.772554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.772566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.773693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.773725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.773755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.773782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.774053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.774093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.774125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.774153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.774181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.223 [2024-07-24 18:31:58.774499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.774512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.224 [2024-07-24 18:31:58.777963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.777998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.778045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.778073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.778257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.778299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.778328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.778356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.778383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.778560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.778571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.779671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.779704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.224 [2024-07-24 18:31:58.779740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.779768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.779943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.779984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.780015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.780042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.780070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.780417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.780430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.782720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.782753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.782779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.782806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.224 [2024-07-24 18:31:58.782978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.783018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.783046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.783073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.783100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.783364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.783375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.784382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.784412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.784450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.784477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.784712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.784752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.224 [2024-07-24 18:31:58.784780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.784807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.784834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.785136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.785150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.786799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.786834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.786860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.786888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.787062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.787105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.787132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.787175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.224 [2024-07-24 18:31:58.787202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.787375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.787386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.788486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.788517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.788543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.788569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.788772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.788812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.788840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.788880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.788911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.789088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.224 [2024-07-24 18:31:58.789100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.790892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.790926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.790954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.790981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.791198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.791240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.791269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.791296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.791326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.791505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.791517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.224 [2024-07-24 18:31:58.792662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.224 [2024-07-24 18:31:58.793701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 (last message repeated ~270 times between 18:31:58.793701 and 18:31:58.960391) 
00:28:50.490 [2024-07-24 18:31:58.960404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.964457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.964725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.965682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.965943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.966266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.967089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.967382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.967621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.967898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.968175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.968188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.972196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.490 [2024-07-24 18:31:58.972469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.973462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.973725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.974045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.974936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.975200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.975454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.975715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.975968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.975981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.979973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.980237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.981135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.490 [2024-07-24 18:31:58.981395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.981717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.982526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.982871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.983123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.490 [2024-07-24 18:31:58.983378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.983638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.983651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.987428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.987700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.988637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.988895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.989215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.491 [2024-07-24 18:31:58.990034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.990372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.990627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.990889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.991147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.991162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.995061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.995325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.996277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.996536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.996862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.997692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.998015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.491 [2024-07-24 18:31:58.998266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.998527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.998832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:58.998847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.002859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.003123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.004089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.004348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.004680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.005592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.005853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.006108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.006932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.491 [2024-07-24 18:31:59.007209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.007222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.009792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.010054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.010312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.010579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.010763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.011029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.011286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.012311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.012563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.012876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.012889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.491 [2024-07-24 18:31:59.014863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.014902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.015157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.015424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.015690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.016546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.016804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.017333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.017961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.018274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.018287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.021082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.021124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.491 [2024-07-24 18:31:59.021377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.022438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.022641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.023617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.024590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.024623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.025110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.025329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.025342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.028277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.029049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.029454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.029485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.491 [2024-07-24 18:31:59.029823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.030744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.030776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.031695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.032670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.032851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.032862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.036044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.036304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.037021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.037055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.037342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.037384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.491 [2024-07-24 18:31:59.037642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.037672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.038694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.038873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.038886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.042720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.491 [2024-07-24 18:31:59.043270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.043886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.043918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.044232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.044916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.044949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.045330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.492 [2024-07-24 18:31:59.045363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.045687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.045700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.049841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.049885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.050838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.051660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.051893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.052541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.052573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.052826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.052857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.053043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.492 [2024-07-24 18:31:59.053055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.055397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.055432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.055875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.055906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.056141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.057154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.057187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.058144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.058175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.058367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.058378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.061882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.492 [2024-07-24 18:31:59.061921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.061947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.061974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.062162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.063234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.063274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.064225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.064256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.064435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.064447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.067335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.067372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.492 [2024-07-24 18:31:59.067406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.492 [2024-07-24 18:31:59.067433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [... same *ERROR* message repeated ~270 more times, timestamps 2024-07-24 18:31:59.067794 through 18:31:59.150546; identical entries collapsed ...]
00:28:50.759 [2024-07-24 18:31:59.151248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.151280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.151320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.151648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.151691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.151720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.151970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.152000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.152334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.152347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.155700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.155741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.155768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.759 [2024-07-24 18:31:59.156716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.156895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.156942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.157343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.157374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.157402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.157728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.157742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.160074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.160108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.160134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.160684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.160863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.759 [2024-07-24 18:31:59.161661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.161694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.162637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.162670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.162849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.162860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.164856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.164891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.164918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.165868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.166044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.166088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.166653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.759 [2024-07-24 18:31:59.166687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.167584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.167766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.167777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.170535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.171199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.171237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.171263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.171471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.171514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.172476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.172508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.173463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.759 [2024-07-24 18:31:59.173737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.173750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.177143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.177404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.177437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.178365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.178575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.178619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.179589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.179621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.180588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.180872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.759 [2024-07-24 18:31:59.180884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.760 [2024-07-24 18:31:59.184274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.184532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.185294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.186091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.186269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.186313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.187263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.187295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.187711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.187889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.187901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.190784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.191324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.760 [2024-07-24 18:31:59.192141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.193040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.193222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.193938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.194735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.195606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.196142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.196468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.196481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.200157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.200578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.201368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.202325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.760 [2024-07-24 18:31:59.202503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.203485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.204141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.204647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.204919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.205113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.205125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.208200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.209245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.210255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.211348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.211526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.211838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.760 [2024-07-24 18:31:59.212094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.212348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.212603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.212884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.212897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.216589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.217549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.218160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.218425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.218746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.219008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.219260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.220272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.760 [2024-07-24 18:31:59.221302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.221502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.221515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.224977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.225240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.225494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.225752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.226044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.226312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.226566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.226824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.227080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.227398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.760 [2024-07-24 18:31:59.227413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.229859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.230125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.230391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.230649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.230985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.231246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.231509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.231775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.232032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.232330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.232343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.234764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.760 [2024-07-24 18:31:59.235029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.235288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.235542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.235867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.236130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.236385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.236646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.236904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.237191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.237204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.239613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.239880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.760 [2024-07-24 18:31:59.240137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.760 [2024-07-24 18:31:59.240396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.240687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.240967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.241220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.241471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.241750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.242051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.242064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.244500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.244771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.245050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.245309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:50.761 [2024-07-24 18:31:59.245552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.761 [2024-07-24 18:31:59.245832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:50.761 [... identical *ERROR* line repeated for every subsequent allocation attempt, 2024-07-24 18:31:59.246093 through 18:31:59.373949 ...]
00:28:51.028 [2024-07-24 18:31:59.373978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.374260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.374296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.374324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.374350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.374379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.374709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.374722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.376178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.376209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.376235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.376272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.376446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.028 [2024-07-24 18:31:59.376483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.376517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.376547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.376573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.376748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.028 [2024-07-24 18:31:59.376759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.377811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.377841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.377867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.377892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.378145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.378187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.378220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.029 [2024-07-24 18:31:59.378248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.378275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.378598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.378611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.380137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.380171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.380200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.380226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.380398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.380439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.380467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.380492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.380520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.029 [2024-07-24 18:31:59.380698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.380710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.381806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.381837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.381867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.381897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.382071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.382115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.382144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.382172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.382199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.382487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.382500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.029 [2024-07-24 18:31:59.384185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.384216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.384248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.384275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.384450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.384488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.384516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.384556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.384584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.384774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.384792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.385910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.385941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.029 [2024-07-24 18:31:59.385967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.385993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.386162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.386204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.386231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.386258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.386285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.386549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.386560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.388471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.388502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.388531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.388557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.029 [2024-07-24 18:31:59.388764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.388805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.388832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.388859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.388886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.389059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.389071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.390173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.390204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.390236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.390265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.390439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.390482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.029 [2024-07-24 18:31:59.390513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.390540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.390570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.390757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.390776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.392607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.392644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.392675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.392703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.392879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.392920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.392948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.392980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.029 [2024-07-24 18:31:59.393007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.393184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.393207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.394304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.394334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.394361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.029 [2024-07-24 18:31:59.394387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.394561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.394603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.394635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.394662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.394689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.394863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.030 [2024-07-24 18:31:59.394875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.396520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.396551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.396581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.396608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.396867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.396910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.396937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.396965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.396991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.397213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.397226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.398289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.030 [2024-07-24 18:31:59.398320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.398351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.398379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.398551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.398590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.398620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.398661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.398689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.398858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.398869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.400441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.400472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.400499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.030 [2024-07-24 18:31:59.400526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.400847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.400884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.400913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.400943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.400970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.401142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.401154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.402242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.402272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.402299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.402328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.402540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.030 [2024-07-24 18:31:59.402582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.402609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.402641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.402668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.402843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.402854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.404367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.404403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.404433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.404460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.404802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.404838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.030 [2024-07-24 18:31:59.404868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.030 [2024-07-24 18:31:59.404896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.033 (previous message repeated through [2024-07-24 18:31:59.491608])
00:28:51.033 [2024-07-24 18:31:59.491875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.492140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.492411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.492681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.492973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.492986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.495161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.495429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.495688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.495941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.496261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.496525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.496795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.033 [2024-07-24 18:31:59.497054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.497305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.497599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.497612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.499436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.499697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.499949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.500616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.500847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.033 [2024-07-24 18:31:59.501429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.501998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.502258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.502510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.034 [2024-07-24 18:31:59.502863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.502877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.504723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.504982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.505239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.505501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.505768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.506032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.506284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.506536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.506799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.507055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.507068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.034 [2024-07-24 18:31:59.509013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.509275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.509531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.509789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.510124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.510382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.510644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.510902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.511161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.511494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.511508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.514017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.514965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.034 [2024-07-24 18:31:59.515412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.516204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.516383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.517395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.518302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.518544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.518810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.519129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.519142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.521371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.521969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.523005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.524024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.034 [2024-07-24 18:31:59.524207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.525189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.525492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.525756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.526016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.526363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.526376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.528532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.529142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.529951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.530915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.531097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.531930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.034 [2024-07-24 18:31:59.532197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.532454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.532722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.533039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.533053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.534567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.535520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.536545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.537517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.537702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.537975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.538235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.538494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.034 [2024-07-24 18:31:59.538768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.539012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.539025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.540723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.541515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.542497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.543456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.543769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.544036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.544289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.544539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.544913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.545092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.034 [2024-07-24 18:31:59.545105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.547005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.547963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.548923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.549716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.549983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.550249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.550504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.550762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.551637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.034 [2024-07-24 18:31:59.551848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.551868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.553703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.035 [2024-07-24 18:31:59.554656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.555603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.555893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.556239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.556500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.556759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.557275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.558070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.558251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.558263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.560253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.561201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.561824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.035 [2024-07-24 18:31:59.562083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.562412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.562682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.562939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.563949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.564861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.565040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.565051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.567090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.568074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.568339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.568592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.568872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.035 [2024-07-24 18:31:59.569134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.569787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.570578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.571536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.571720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.571731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.573773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.574269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.574528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.574789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.575162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.575423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.576423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.035 [2024-07-24 18:31:59.577479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.578517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.578699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.578712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.580843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.580876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.581127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.581381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.581682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.581942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.582821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.583609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.035 [2024-07-24 18:31:59.584558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.035 [2024-07-24 18:31:59.584753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.303 [... previous message repeated continuously through 18:31:59.653399; duplicate log lines elided ...] 
00:28:51.303 [2024-07-24 18:31:59.653426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.653452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.653718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.653733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.654727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.654763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.654791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.654818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.655059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.655102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.655131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.655158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.655186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.303 [2024-07-24 18:31:59.655505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.655520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.656853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.656884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.656910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.656936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.657109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.657153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.657180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.657207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.657238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.657508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.657520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.303 [2024-07-24 18:31:59.658662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.658693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.658722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.658748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.659076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.659113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.659141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.659169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.659207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.659548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.659562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.660739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.660770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.303 [2024-07-24 18:31:59.660800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.660835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.661007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.661046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.661081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.661109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.661136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.661331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.661342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.662394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.662426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.662465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.662503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.303 [2024-07-24 18:31:59.662881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.662919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.662948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.662975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.663003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.663279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.663292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.664705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.664738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.664765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.664791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.664961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.665006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.303 [2024-07-24 18:31:59.665033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.665060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.665088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.665317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.665328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.666332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.666768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.303 [2024-07-24 18:31:59.666801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.666829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.667171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.667208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.667237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.667264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.304 [2024-07-24 18:31:59.667293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.667556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.667569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.668878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.669834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.669865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.669891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.670133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.670177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.670209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.671088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.671121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.671303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.304 [2024-07-24 18:31:59.671314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.672905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.672942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.672969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.673224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.673446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.673486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.674278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.674311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.674336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.674513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.674524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.675649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.304 [2024-07-24 18:31:59.675680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.675714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.676673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.676854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.677116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.677147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.677397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.677427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.677723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.677737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.679044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.679074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.679100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.304 [2024-07-24 18:31:59.680049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.680289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.680336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.681350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.681388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.682421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.682604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.682615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.684259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.684525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.684558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.684586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.684881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.304 [2024-07-24 18:31:59.684941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.685201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.685236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.685485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.685812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.685826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.687711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.687971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.688005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.688264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.688586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.688631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.688893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.304 [2024-07-24 18:31:59.688934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.689192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.689521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.689535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.691398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.691663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.691919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.692175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.692486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.692537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.692805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.692846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.693097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.304 [2024-07-24 18:31:59.693428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.693441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.695268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.695527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.695788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.696044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.696277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.696543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.696803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.304 [2024-07-24 18:31:59.697057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.305 [2024-07-24 18:31:59.697309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.305 [2024-07-24 18:31:59.697645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.305 [2024-07-24 18:31:59.697661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.305 [2024-07-24 18:31:59.699568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [previous message repeated ~270 times between 18:31:59.699830 and 18:31:59.853483]
00:28:51.308 [2024-07-24 18:31:59.853799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.853812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.855607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.856408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.857353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.857385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.857561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.858550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.858583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.859401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.860205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.860384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.860395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.308 [2024-07-24 18:31:59.862143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.862401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.863374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.863419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.863600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.863642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.864673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.864710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.865531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.865761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.865774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.867048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.867305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.308 [2024-07-24 18:31:59.867561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.867595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.867917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.868808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.868840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.869726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.869756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.869932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.869943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.872040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.872075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.872848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.873104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.308 [2024-07-24 18:31:59.873418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.873686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.873723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.873974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.874004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.874184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.874196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.875306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.875340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.876187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.876220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.876396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.877456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.308 [2024-07-24 18:31:59.877490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.878341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.878371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.878617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.878633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.308 [2024-07-24 18:31:59.880266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.880297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.880331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.880358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.880536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.881495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.881527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.882572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.309 [2024-07-24 18:31:59.882610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.882901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.882915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.883932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.883964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.883991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.884017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.884294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.884339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.884369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.884396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.884425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.884753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.309 [2024-07-24 18:31:59.884766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.886942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.887982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.309 [2024-07-24 18:31:59.888013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.888040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.888076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.888373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.888420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.888449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.888476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.888504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.888831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.888845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.890319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.890349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.890375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.309 [2024-07-24 18:31:59.890401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.890572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.890613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.890645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.890672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.890698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.890875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.309 [2024-07-24 18:31:59.890891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.573 [2024-07-24 18:31:59.892027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.892068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.892098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.892124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.892297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.574 [2024-07-24 18:31:59.892337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.892365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.892393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.892421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.892743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.892772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.894344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.894383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.894422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.894450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.894642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.894698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.894729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.574 [2024-07-24 18:31:59.894755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.894781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.894955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.894966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.896084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.896116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.896143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.896170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.896341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.896383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.896411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.896443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.896477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.574 [2024-07-24 18:31:59.896729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.896742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.898415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.898446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.898473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.898500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.898714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.898757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.898785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.898811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.898839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.899011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.899023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.574 [2024-07-24 18:31:59.900127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.900163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.900193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.900220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.900393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.900436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.900463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.900489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.900516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.900767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.900780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.902712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.574 [2024-07-24 18:31:59.902746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.574 [2024-07-24 18:31:59.902773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
[... identical "Failed to get src_mbufs!" errors repeated from 18:31:59.902798 through 18:31:59.967348, omitted ...] 
00:28:51.577 [2024-07-24 18:31:59.967361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.577 [2024-07-24 18:31:59.969696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.970068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.970323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.970577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.970861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.971703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.972005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.972259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.973292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.973633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.973645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.975356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.975611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.577 [2024-07-24 18:31:59.976680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.976940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.977271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.977532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.977796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.978845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.979103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.979435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.979449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.981224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.982223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.982480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.982821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.577 [2024-07-24 18:31:59.982998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.983266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.983524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.983796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.984187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.984365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.984378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.986250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.986515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.986895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.577 [2024-07-24 18:31:59.987665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.987982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.988268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.578 [2024-07-24 18:31:59.989146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.989402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.989660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.990030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.990043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.992433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.992695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.992951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.993211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.993497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.994411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.994670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.995100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.578 [2024-07-24 18:31:59.995824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.996151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.996164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.997797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.998292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.998949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.999204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:31:59.999475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.000561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.000844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.001739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.002499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.002747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.578 [2024-07-24 18:32:00.002761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.004564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.004836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.005597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.006020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.006346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.006967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.007542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.007808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.008072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.008320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.008333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.010283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.578 [2024-07-24 18:32:00.010661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.010980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.011924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.012324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.012614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.013543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.013862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.014173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.014485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.014504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.016785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.017106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.017415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.578 [2024-07-24 18:32:00.017732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.018076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.018418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.018737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.019049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.019357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.019589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.019610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.022101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.023244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.023550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.023852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.024196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.578 [2024-07-24 18:32:00.024504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.025453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.026622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.027785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.028095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.028114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.578 [2024-07-24 18:32:00.029825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.030146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.030606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.031610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.031844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.033012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.033751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.579 [2024-07-24 18:32:00.034740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.035916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.036177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.036197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.039175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.040274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.041098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.042111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.042344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.043515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.044018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.044321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.044617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.579 [2024-07-24 18:32:00.045010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.045031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.047406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.048526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.049668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.049985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.050346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.050675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.050984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.051973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.053100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.053324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.053341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.579 [2024-07-24 18:32:00.055152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.055459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.055764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.056064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.056283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.057376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.058506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.059098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.060087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.060313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.060331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.062630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.063669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.579 [2024-07-24 18:32:00.064794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.065545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.065846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.066883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.067811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.068078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.068344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.068609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.068623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.070914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.071461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.072456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.579 [2024-07-24 18:32:00.073538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.579 [2024-07-24 18:32:00.073725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:51.581 [... previous message repeated for entries 2024-07-24 18:32:00.074704 through 18:32:00.178567 ...]
00:28:51.845 [2024-07-24 18:32:00.178594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.178621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.178805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.178849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.178877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.178904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.178931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.179149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.179162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.180253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.180285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.180319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.180349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.845 [2024-07-24 18:32:00.180616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.180657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.180685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.180712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.180741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.181053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.181066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.182578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.182610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.182648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.182676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.182855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.182896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.845 [2024-07-24 18:32:00.182923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.182956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.182987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.183167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.183178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.186098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.186136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.186164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.186192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.845 [2024-07-24 18:32:00.186502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.186555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.186594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.186623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.846 [2024-07-24 18:32:00.186654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.186998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.187011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.189610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.189648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.189675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.189708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.189884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.189922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.189949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.189977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.190020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.190213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.846 [2024-07-24 18:32:00.190225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.192842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.192878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.192908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.192935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.193244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.193289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.193318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.193359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.193386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.193576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.193589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.196244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.846 [2024-07-24 18:32:00.196280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.196310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.196337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.196512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.196555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.196583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.196611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.196642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.196926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.196938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.199070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.199106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.199134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.846 [2024-07-24 18:32:00.199161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.199339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.199386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.199414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.199442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.199472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.199720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.199734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.203084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.203121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.203148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.203176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.203479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.846 [2024-07-24 18:32:00.203518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.203547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.203575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.203603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.203907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.203922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.206253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.206300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.206327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.206366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.206644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.206709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.206760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.846 [2024-07-24 18:32:00.206800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.206830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.207107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.207120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.209314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.209351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.209382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.209409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.209735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.209772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.209815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.209842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.209869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.846 [2024-07-24 18:32:00.210199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.210213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.212466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.212502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.212532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.212560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.212791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.212841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.212871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.846 [2024-07-24 18:32:00.212900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.212928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.213205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.213217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.847 [2024-07-24 18:32:00.215427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.215464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.215492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.215519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.215842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.215881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.215910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.215939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.215967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.216253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.216266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.218436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.218472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.847 [2024-07-24 18:32:00.218500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.218535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.218880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.218918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.218959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.218988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.219030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.219310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.219324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.221635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.221916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.221950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.221978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.847 [2024-07-24 18:32:00.222295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.222334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.222363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.222392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.222421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.222734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.222757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.224921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.225183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.225251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.225294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.225636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.847 [2024-07-24 18:32:00.225690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:51.847 [2024-07-24 18:32:00.225733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [repeats of the same "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 omitted; identical messages logged continuously from 18:32:00.225997 through 18:32:00.433139]
00:28:51.850 [2024-07-24 18:32:00.433149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.434950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.434988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.435249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.435284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.435552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.435565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.435842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.435882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.436143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.436176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.436524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:51.850 [2024-07-24 18:32:00.436537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.114 [2024-07-24 18:32:00.436548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.436560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.438519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.438556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.438822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.438854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.439093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.439107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.439378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.439414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.439678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.439711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.440043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.114 [2024-07-24 18:32:00.440056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.440067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.440078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.442182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.442220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.442477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.442508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.442858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.442872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.443166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.443209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.443471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.443507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.114 [2024-07-24 18:32:00.443846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.443859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.443870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.443882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.445882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.445920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.446188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.446220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.446572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.446585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.446860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.446895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.447156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.114 [2024-07-24 18:32:00.447199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.447542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.447555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.447564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.447575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.449560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.449599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.449865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.449902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.450247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.450260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.450526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.450558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.114 [2024-07-24 18:32:00.450840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.450871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.451179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.451192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.451206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.451216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.453213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.453250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.453504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.453539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.453802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.453815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.454097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.114 [2024-07-24 18:32:00.454130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.454388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.454419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.454747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.454761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.454771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.454782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.456635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.456681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.456932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.456962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.114 [2024-07-24 18:32:00.457260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.457273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.115 [2024-07-24 18:32:00.457562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.457606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.457881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.457915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.458227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.458241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.458252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.458265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.460133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.460176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.460428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.460459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.460825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.115 [2024-07-24 18:32:00.460838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.461103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.461146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.461408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.461454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.461734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.461748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.461759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.461770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.463915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.463976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.464235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.464266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.115 [2024-07-24 18:32:00.464586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.464602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.464872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.464904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.465163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.465194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.465465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.465479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.465490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.465502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.467483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.467527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.467799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.115 [2024-07-24 18:32:00.467846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.468139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.468152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.468428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.468462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.468719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.468749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.469069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.469082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.469093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.469104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.471041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.471078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.115 [2024-07-24 18:32:00.471338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.471378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.471764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.471777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.472048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.472084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.472345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.472377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.472653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.472667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.472678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.472689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.474578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.115 [2024-07-24 18:32:00.474615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.474870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.474902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.475236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.475250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.475520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.475555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.475828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.475865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.476231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.476244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.476257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.115 [2024-07-24 18:32:00.476268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.115 [2024-07-24 18:32:00.478186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:52.119 [identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated continuously from 2024-07-24 18:32:00.478225 through 18:32:00.623729]
00:28:52.119 [2024-07-24 18:32:00.624677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.624712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.625786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.626053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.626065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.626101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.626355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.626385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.626663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.626991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.627006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.627018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.627029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.119 [2024-07-24 18:32:00.628073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.628650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.629641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.630681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.630865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.630877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.630920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.631940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.632201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.632456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.632745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.632758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.632772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.119 [2024-07-24 18:32:00.632783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.635938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.636730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.637674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.638617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.638893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.638916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.639188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.639449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.639715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.640265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.640508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.640521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.119 [2024-07-24 18:32:00.640531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.640541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.642441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.643404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.644362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.644973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.645339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.645354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.645614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.645899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.646161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.647067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.647251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.119 [2024-07-24 18:32:00.647264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.647274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.647285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.651179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.651449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.651713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.651982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.652317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.652330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.653187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.653978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.654918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.655306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.119 [2024-07-24 18:32:00.655487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.655500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.655510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.655521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.656906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.657165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.657433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.657468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.657697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.657710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.658502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.659465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.660093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.119 [2024-07-24 18:32:00.660130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.660308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.660321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.660331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.660341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.663256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.663299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.663956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.663993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.664219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.664232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.119 [2024-07-24 18:32:00.665113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.665160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.120 [2024-07-24 18:32:00.665816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.665849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.666075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.666087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.666098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.666108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.667944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.667982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.668454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.668496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.668707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.668721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.669716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.120 [2024-07-24 18:32:00.669752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.670563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.670595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.670785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.670798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.670808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.670818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.673548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.673589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.674445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.674488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.674675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.674694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.120 [2024-07-24 18:32:00.675770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.675805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.676294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.676327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.676508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.676520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.676530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.676540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.678402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.678442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.679310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.679356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.679532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.120 [2024-07-24 18:32:00.679544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.680006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.680043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.680789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.680825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.681031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.681046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.681057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.681067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.683272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.683314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.684336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.684390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.120 [2024-07-24 18:32:00.684567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.684579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.684846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.684878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.685132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.685163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.685469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.685482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.685492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.685502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.687814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.688302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.688349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.120 [2024-07-24 18:32:00.688600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.688926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.688940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.689198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.689230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.689529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.690457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.690644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.690657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.690667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.690679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.693743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.120 [2024-07-24 18:32:00.693784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.120 [2024-07-24 18:32:00.694042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[last message repeated for all timestamps from 18:32:00.694042 through 18:32:00.780952]
00:28:52.389 [2024-07-24 18:32:00.780952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:52.389 [2024-07-24 18:32:00.780989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.781244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.781257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.781267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.781277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.784308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.784348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.784601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.784638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.784814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.784826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.785758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.785796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.389 [2024-07-24 18:32:00.786816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.786849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.787024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.787036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.787046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.787056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.788932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.788969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.788997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.789036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.789209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.789220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.789520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.389 [2024-07-24 18:32:00.789552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.789583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.789611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.789911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.789925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.789934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.789944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.792835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.792872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.792899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.792925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.793147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.793158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.389 [2024-07-24 18:32:00.793198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.793227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.793261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.793288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.793467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.793479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.793489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.793499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.794621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.794658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.794686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.794713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.795034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.389 [2024-07-24 18:32:00.795048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.795084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.795113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.795141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.795183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.795541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.795555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.795566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.795578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.389 [2024-07-24 18:32:00.798453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.798797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.800089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.800123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.800154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.389 [2024-07-24 18:32:00.800768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.800975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.800988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.801030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.801058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.801090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.801343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.801523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.801535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.389 [2024-07-24 18:32:00.801545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.801555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.803848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.804903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.390 [2024-07-24 18:32:00.804939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.805801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.806084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.806097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.806133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.806385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.806418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.806674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.806999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.807013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.807023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.807038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.808054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.390 [2024-07-24 18:32:00.808502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.808533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.809317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.809503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.809515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.809558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.810482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.810515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.811138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.811322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.811334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.811343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.811354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.390 [2024-07-24 18:32:00.813245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.814180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.814222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.815203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.815373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.815385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.815427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.816049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.816082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.816861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.817042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.817054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.817064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.390 [2024-07-24 18:32:00.817075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.818568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.818830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.818867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.819245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.819427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.819440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.819483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.820467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.820501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.821433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.821614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.821632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.390 [2024-07-24 18:32:00.821642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.821653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.824561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.824829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.824864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.825194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.825376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.825388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.825430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.825692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.825724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.826360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.826574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.390 [2024-07-24 18:32:00.826588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.826599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.826609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.827695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.827727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.828576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.828609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.828795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.828808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.828848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.829841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.829882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.390 [2024-07-24 18:32:00.829915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.390 [2024-07-24 18:32:00.830196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:52.394 [2024-07-24 18:32:00.935404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:52.394 (identical error repeated ~273 times between 18:32:00.830196 and 18:32:00.935404; intermediate entries elided)
00:28:52.394 [2024-07-24 18:32:00.936330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.936629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.936643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.936686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.936943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.936975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.937227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.937563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.937576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.937587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.937598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.940161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.940985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.394 [2024-07-24 18:32:00.941020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.941930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.942229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.942242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.942285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.942539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.942580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.943599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.943970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.943984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.943998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.944008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.946635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.394 [2024-07-24 18:32:00.947546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.947811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.948066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.948361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.948374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.948412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.948671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.949605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.950367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.950548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.950560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.950570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.950580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.394 [2024-07-24 18:32:00.952828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.953714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.953969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.954245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.954425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.954437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.954738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.955608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.956500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.956758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.957073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.957087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.957098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.394 [2024-07-24 18:32:00.957110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.960248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.960819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.961835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.962096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.962428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.962445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.963463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.963729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.964069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.964887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.965070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.965083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.394 [2024-07-24 18:32:00.965092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.965103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.968529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.968799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.969056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.969314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.969606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.969619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.970668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.970929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.971223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.972111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.972460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.394 [2024-07-24 18:32:00.972474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.972485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.972496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.974900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.975173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.975819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.975855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.976113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.394 [2024-07-24 18:32:00.976126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.976395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.977180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.977587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.977621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.659 [2024-07-24 18:32:00.977965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.977980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.977992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.978003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.980598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.980655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.981040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.981074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.981256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.981269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.981530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.981562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.982176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.659 [2024-07-24 18:32:00.982209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.982448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.982461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.982471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.982482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.984809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.984849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.985107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.985147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.985413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.985427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.986444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.986486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.659 [2024-07-24 18:32:00.986744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.986777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.987064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.987077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.987087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.987096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.990059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.990100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.990351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.990383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.990646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.990659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.990923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.659 [2024-07-24 18:32:00.990959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.991843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.991875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.992210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.992224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.992235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.992245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.659 [2024-07-24 18:32:00.994481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.994522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.994783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.994815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.995142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.995156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.660 [2024-07-24 18:32:00.995418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.995460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.995721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.995757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.995967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.995979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.995995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.996005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.998584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.998631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.998886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.998917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.999239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.660 [2024-07-24 18:32:00.999252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.999522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.999554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.999816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:00.999861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:01.000175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:01.000188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:01.000198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:01.000208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:01.003074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:01.003342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:01.003378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.660 [2024-07-24 18:32:01.003633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.660 [2024-07-24 18:32:01.003955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:52.663 [last message repeated for timestamps 2024-07-24 18:32:01.003969 through 18:32:01.126817]
00:28:52.663 [2024-07-24 18:32:01.126829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.126840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.126850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.130506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.130553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.130812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.130843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.131130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.131143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.132067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.132099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.132349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.132381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.663 [2024-07-24 18:32:01.132621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.132637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.132647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.132657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.136069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.136109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.137040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.137072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.137338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.137351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.138365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.138397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.663 [2024-07-24 18:32:01.138652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.663 [2024-07-24 18:32:01.138682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.138938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.138948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.138958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.138967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.142528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.142567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.142596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.142623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.142804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.142814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.143600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.143637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.664 [2024-07-24 18:32:01.143666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.143699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.143874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.143887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.143897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.143911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.146382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.146419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.146447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.146475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.146804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.146818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.146852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.664 [2024-07-24 18:32:01.146881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.146910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.146941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.147117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.147129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.147138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.147148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.149722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.149759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.149786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.149814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.149988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.150000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.664 [2024-07-24 18:32:01.150040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.150068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.150095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.150123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.150463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.150476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.150486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.150496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.664 [2024-07-24 18:32:01.154476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.154815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.157874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.157911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.157941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.158767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.664 [2024-07-24 18:32:01.159067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.159081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.159124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.664 [2024-07-24 18:32:01.159154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.159187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.159443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.159623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.159642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.159652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.159663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.162036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.162785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.162819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.665 [2024-07-24 18:32:01.163597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.163785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.163796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.163838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.164785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.164817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.165157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.165337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.165349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.165359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.165369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.167123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.168070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.665 [2024-07-24 18:32:01.168103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.169051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.169298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.169311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.169356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.170349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.170388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.171390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.171570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.171581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.171590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.171600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.174140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.665 [2024-07-24 18:32:01.174404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.174437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.175285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.175509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.175521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.175567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.176513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.176546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.177479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.177811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.177824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.177833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.177842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.665 [2024-07-24 18:32:01.180648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.180910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.180944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.181797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.182093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.182106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.182150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.182422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.182453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.183473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.183662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.183673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.183683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.665 [2024-07-24 18:32:01.183692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.186603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.187437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.187471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.188317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.188604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.188617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.188663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.188920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.188951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.190021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.190370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.665 [2024-07-24 18:32:01.190386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.665 [2024-07-24 18:32:01.190396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:52.933 [previous message repeated for subsequent allocation attempts, timestamps 2024-07-24 18:32:01.190407 through 18:32:01.301000]
00:28:52.933 [2024-07-24 18:32:01.301011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.933 [2024-07-24 18:32:01.301052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.933 [2024-07-24 18:32:01.301893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.933 [2024-07-24 18:32:01.301925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.933 [2024-07-24 18:32:01.302743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.933 [2024-07-24 18:32:01.302925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.933 [2024-07-24 18:32:01.302936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.933 [2024-07-24 18:32:01.302945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.933 [2024-07-24 18:32:01.302955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.933 [2024-07-24 18:32:01.304685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.933 [2024-07-24 18:32:01.304950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.304997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.305258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.934 [2024-07-24 18:32:01.305506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.305520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.305560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.305821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.305853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.306640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.306894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.306906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.306916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.306926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.308763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.309759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.309799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.934 [2024-07-24 18:32:01.310053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.310364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.310377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.310422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.310700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.310748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.311008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.311343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.311359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.311370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.311381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.312931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.313193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.934 [2024-07-24 18:32:01.313451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.314371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.314681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.314693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.314738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.314993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.315255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.315515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.315847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.315863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.315873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.315884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.317705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.934 [2024-07-24 18:32:01.317972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.318541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.319143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.319459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.319472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.319738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.319998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.320254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.320513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.320701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.320714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.320724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.320734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.934 [2024-07-24 18:32:01.322929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.323893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.324149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.324406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.324683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.324696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.324962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.325217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.325826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.326380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.326695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.326709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.326720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.934 [2024-07-24 18:32:01.326731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.328964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.329399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.329659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.329917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.330182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.330194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.330459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.330722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.331644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.331898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.332203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.332216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.934 [2024-07-24 18:32:01.332230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.332241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.334567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.334835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.335091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.934 [2024-07-24 18:32:01.335127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.335354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.335367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.335637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.335897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.336897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.336939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.337297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.935 [2024-07-24 18:32:01.337310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.337321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.337334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.339046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.339085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.339853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.339884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.340201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.340214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.340474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.340514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.340775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.340811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.935 [2024-07-24 18:32:01.341054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.341067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.341078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.341088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.342815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.342855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.343109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.343159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.343480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.343493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.343809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.343844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.344568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.935 [2024-07-24 18:32:01.344598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.344928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.344942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.344953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.344965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.347105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.347143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.347533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.347565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.347921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.347934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.348216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.348251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.935 [2024-07-24 18:32:01.348516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.348563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.348912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.348930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.348941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.348952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.350690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.350736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.351000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.351038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.351364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.351378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.352194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.935 [2024-07-24 18:32:01.352230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.352493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.352525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.352854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.352868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.352878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.352888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.355315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.355352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.355610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.355646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.355950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.935 [2024-07-24 18:32:01.355965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.935 [2024-07-24 18:32:01.356250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.939 [... identical "Failed to get src_mbufs!" error repeated continuously from 18:32:01.356293 through 18:32:01.449039 ...]
00:28:52.939 [2024-07-24 18:32:01.449050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.451144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.451180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.451837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.451867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.452043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.452054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.452840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.452873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.453798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.453829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.454005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.454016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.939 [2024-07-24 18:32:01.454026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.454036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.456649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.456692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.457617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.457652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.457828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.457839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.458794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.458827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.459643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.459675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.459881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.939 [2024-07-24 18:32:01.459892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.459901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.459910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.461260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.461295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.461548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.461580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.461762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.461775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.462737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.462775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.463756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.463787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.939 [2024-07-24 18:32:01.463964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.463975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.463985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.463996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.466009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.466045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.466072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.466103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.466408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.466421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.466682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.466714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.466743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.939 [2024-07-24 18:32:01.466772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.467061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.467073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.467083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.467092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.939 [2024-07-24 18:32:01.468432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.468735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.469880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.469912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.469940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.469969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.470343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.470356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.470394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.939 [2024-07-24 18:32:01.470423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.470451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.470481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.470697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.470708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.470725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.470734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.939 [2024-07-24 18:32:01.471722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.471755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.471782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.471816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.471990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.472002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.940 [2024-07-24 18:32:01.472044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.472076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.472110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.472137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.472315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.472327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.472336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.472345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.473639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.473670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.473699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.473954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.474134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.940 [2024-07-24 18:32:01.474146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.474182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.474211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.474252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.475230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.475410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.475421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.475430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.475440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.476497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.477535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.477574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.478485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.940 [2024-07-24 18:32:01.478780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.478793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.478829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.479084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.479114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.479365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.479543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.479555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.479565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.479575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.480590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.481606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.481640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.940 [2024-07-24 18:32:01.482652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.482831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.482842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.482879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.483915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.483949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.484200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.484485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.484501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.484511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.484521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.485712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.486648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.940 [2024-07-24 18:32:01.486680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.487149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.487331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.487344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.487386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.488380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.488420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.489368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.489549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.489560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.489569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.489578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.491157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.940 [2024-07-24 18:32:01.491936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.491969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.492893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.493072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.493082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.493124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.493751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.493790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.494763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.494943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.494954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.494964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.494977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:52.940 [2024-07-24 18:32:01.496237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.496496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.496526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.497182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.497419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.497431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.497470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.498401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.940 [2024-07-24 18:32:01.498434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.941 [2024-07-24 18:32:01.499357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.941 [2024-07-24 18:32:01.499596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.941 [2024-07-24 18:32:01.499608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:52.941 [2024-07-24 18:32:01.499618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:53.201 [2024-07-24 18:32:01.531169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:53.201 [2024-07-24 18:32:01.531179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:53.460
00:28:53.460 Latency(us)
00:28:53.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:53.460 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:53.460 Verification LBA range: start 0x0 length 0x100
00:28:53.460 crypto_ram : 5.76 64.45 4.03 0.00 0.00 1925126.75 135895.45 1624034.51
00:28:53.460 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:53.460 Verification LBA range: start 0x100 length 0x100
00:28:53.460 crypto_ram : 5.61 59.57 3.72 0.00 0.00 2036354.68 91016.40 1731408.69
00:28:53.460 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:53.460 Verification LBA range: start 0x0 length 0x100
00:28:53.460 crypto_ram1 : 5.76 66.46 4.15 0.00 0.00 1847790.22 112407.35 1496527.67
00:28:53.460 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:53.460 Verification LBA range: start 0x100 length 0x100
00:28:53.460 crypto_ram1 : 5.67 65.41 4.09 0.00 0.00 1860634.52 88499.81 1590480.08
00:28:53.460 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:53.460 Verification LBA range: start 0x0 length 0x100
00:28:53.460 crypto_ram2 : 5.40 430.28 26.89 0.00 0.00 276301.45 38377.88 412719.51
00:28:53.460 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:53.460 Verification LBA range: start 0x100 length 0x100
00:28:53.460 crypto_ram2 : 5.40 422.99 26.44 0.00 0.00 281296.46 45298.48 424463.56
00:28:53.460 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:53.460 Verification LBA range: start 0x0 length 0x100
00:28:53.460 crypto_ram3 : 5.48 440.87 27.55 0.00 0.00 264719.66 31667.00 312056.22
00:28:53.460 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:53.460 Verification LBA range: start 0x100 length 0x100
00:28:53.460 crypto_ram3 : 5.45 432.75 27.05 0.00 0.00 269363.78 13841.20 328833.43
00:28:53.461 ===================================================================================================================
00:28:53.461 Total : 1982.78 123.92 0.00 0.00 493654.27 13841.20 1731408.69
00:28:54.028
00:28:54.028 real 0m8.716s
00:28:54.028 user 0m16.647s
00:28:54.028 sys 0m0.418s
00:28:54.028 18:32:02 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:54.028 18:32:02 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:28:54.028 ************************************
00:28:54.028 END TEST bdev_verify_big_io
00:28:54.028 ************************************
00:28:54.028 18:32:02 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:54.028 18:32:02 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:28:54.028 18:32:02 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:54.028 18:32:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:54.028 ************************************
00:28:54.028 START TEST bdev_write_zeroes
00:28:54.028 ************************************
00:28:54.028 18:32:02 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:54.028 [2024-07-24
18:32:02.474860] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:28:54.028 [2024-07-24 18:32:02.474900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370741 ] 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:01.0 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:01.1 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:01.2 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:01.3 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:01.4 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:01.5 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:01.6 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:01.7 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:02.0 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:02.1 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:02.2 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:02.3 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:02.4 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:02.5 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:02.6 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b3:02.7 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:01.0 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:01.1 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:01.2 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:01.3 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:01.4 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:01.5 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:01.6 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:01.7 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:02.0 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:28:54.028 EAL: Requested device 0000:b5:02.1 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:02.2 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:02.3 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:02.4 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:02.5 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:02.6 cannot be used 00:28:54.028 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.028 EAL: Requested device 0000:b5:02.7 cannot be used 00:28:54.028 [2024-07-24 18:32:02.565820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.285 [2024-07-24 18:32:02.636737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.285 [2024-07-24 18:32:02.657669] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:28:54.285 [2024-07-24 18:32:02.665695] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:54.285 [2024-07-24 18:32:02.673715] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:54.285 [2024-07-24 18:32:02.781163] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:56.820 [2024-07-24 18:32:04.935077] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:56.820 [2024-07-24 18:32:04.935129] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:56.820 [2024-07-24 18:32:04.935139] 
vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:56.820 [2024-07-24 18:32:04.943096] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:56.820 [2024-07-24 18:32:04.943109] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:56.820 [2024-07-24 18:32:04.943117] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:56.820 [2024-07-24 18:32:04.951117] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:28:56.820 [2024-07-24 18:32:04.951129] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:56.820 [2024-07-24 18:32:04.951136] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:56.820 [2024-07-24 18:32:04.959137] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:28:56.820 [2024-07-24 18:32:04.959148] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:56.820 [2024-07-24 18:32:04.959155] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:56.820 Running I/O for 1 seconds... 
00:28:57.756
00:28:57.756 Latency(us)
00:28:57.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:57.756 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:57.756 crypto_ram : 1.02 3182.23 12.43 0.00 0.00 40047.78 3434.09 46766.49
00:28:57.756 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:57.756 crypto_ram1 : 1.02 3187.72 12.45 0.00 0.00 39836.91 3460.30 43411.05
00:28:57.756 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:57.756 crypto_ram2 : 1.01 24840.99 97.04 0.00 0.00 5105.71 1507.33 6553.60
00:28:57.756 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:57.756 crypto_ram3 : 1.01 24819.27 96.95 0.00 0.00 5096.03 1507.33 5478.81
00:28:57.756 ===================================================================================================================
00:28:57.756 Total : 56030.21 218.87 0.00 0.00 9072.50 1507.33 46766.49
00:28:58.014
00:28:58.014 real 0m3.945s
00:28:58.014 user 0m3.593s
00:28:58.014 sys 0m0.305s
00:28:58.014 18:32:06 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:58.014 18:32:06 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:28:58.014 ************************************
00:28:58.014 END TEST bdev_write_zeroes
00:28:58.014 ************************************
00:28:58.014 18:32:06 blockdev_crypto_qat -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:58.014 18:32:06 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:28:58.014 18:32:06 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:58.014 18:32:06
blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:58.014 ************************************ 00:28:58.014 START TEST bdev_json_nonenclosed 00:28:58.014 ************************************ 00:28:58.014 18:32:06 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:58.014 [2024-07-24 18:32:06.495350] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:28:58.014 [2024-07-24 18:32:06.495392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371539 ] 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:01.0 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:01.1 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:01.2 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:01.3 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:01.4 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:01.5 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:01.6 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:28:58.014 EAL: Requested device 0000:b3:01.7 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:02.0 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:02.1 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:02.2 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:02.3 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:02.4 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:02.5 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:02.6 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b3:02.7 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:01.0 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:01.1 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:01.2 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:01.3 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:01.4 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: 
Requested device 0000:b5:01.5 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:01.6 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:01.7 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:02.0 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:02.1 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:02.2 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:02.3 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:02.4 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:02.5 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:02.6 cannot be used 00:28:58.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.014 EAL: Requested device 0000:b5:02.7 cannot be used 00:28:58.014 [2024-07-24 18:32:06.585527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.272 [2024-07-24 18:32:06.654827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.272 [2024-07-24 18:32:06.654880] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
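The bdev_json_nonenclosed failure above is expected: json_config_prepare_ctx rejects any bdevperf --json config whose top level is not a JSON object. A minimal sketch of that check, with hypothetical stand-in files (the real spdk/test/bdev/nonenclosed.json fixture may differ):

```shell
# Hypothetical stand-ins: a top-level array is valid JSON but is not
# enclosed in {}, so SPDK's json_config rejects it.
printf '[{"subsystems": []}]\n' > /tmp/nonenclosed.json
printf '{"subsystems": []}\n'   > /tmp/enclosed.json

check_enclosed() {
  # Mirrors the check in spirit only: the top level must be a JSON object.
  python3 - "$1" <<'PY'
import json, sys
with open(sys.argv[1]) as f:
    cfg = json.load(f)
print("ok" if isinstance(cfg, dict) else "not enclosed in {}")
PY
}

check_enclosed /tmp/nonenclosed.json   # prints: not enclosed in {}
check_enclosed /tmp/enclosed.json      # prints: ok
```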
00:28:58.272 [2024-07-24 18:32:06.654892] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:58.272 [2024-07-24 18:32:06.654900] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:58.272 00:28:58.272 real 0m0.281s 00:28:58.272 user 0m0.168s 00:28:58.272 sys 0m0.111s 00:28:58.272 18:32:06 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.272 18:32:06 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:28:58.272 ************************************ 00:28:58.272 END TEST bdev_json_nonenclosed 00:28:58.272 ************************************ 00:28:58.272 18:32:06 blockdev_crypto_qat -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:58.272 18:32:06 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:28:58.272 18:32:06 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.272 18:32:06 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:58.272 ************************************ 00:28:58.272 START TEST bdev_json_nonarray 00:28:58.272 ************************************ 00:28:58.272 18:32:06 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:58.272 [2024-07-24 18:32:06.853011] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:28:58.272 [2024-07-24 18:32:06.853051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371565 ] 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:01.0 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:01.1 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:01.2 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:01.3 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:01.4 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:01.5 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:01.6 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:01.7 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:02.0 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:02.1 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:02.2 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:02.3 cannot be used 
00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:02.4 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:02.5 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:02.6 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b3:02.7 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b5:01.0 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b5:01.1 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b5:01.2 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b5:01.3 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b5:01.4 cannot be used 00:28:58.531 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.531 EAL: Requested device 0000:b5:01.5 cannot be used 00:28:58.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.532 EAL: Requested device 0000:b5:01.6 cannot be used 00:28:58.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.532 EAL: Requested device 0000:b5:01.7 cannot be used 00:28:58.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.532 EAL: Requested device 0000:b5:02.0 cannot be used 00:28:58.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.532 EAL: Requested device 0000:b5:02.1 cannot be used 00:28:58.532 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.532 EAL: Requested device 0000:b5:02.2 cannot be used 00:28:58.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.532 EAL: Requested device 0000:b5:02.3 cannot be used 00:28:58.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.532 EAL: Requested device 0000:b5:02.4 cannot be used 00:28:58.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.532 EAL: Requested device 0000:b5:02.5 cannot be used 00:28:58.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.532 EAL: Requested device 0000:b5:02.6 cannot be used 00:28:58.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.532 EAL: Requested device 0000:b5:02.7 cannot be used 00:28:58.532 [2024-07-24 18:32:06.942009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.532 [2024-07-24 18:32:07.010882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.532 [2024-07-24 18:32:07.010938] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
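The bdev_json_nonarray error above is the companion check: even when the config is enclosed in {}, the "subsystems" key must hold a JSON array. A sketch with hypothetical stand-in files (the real spdk/test/bdev/nonarray.json fixture may differ):

```shell
# Hypothetical stand-ins: "subsystems" present but an object, not an array.
printf '{"subsystems": {}}\n' > /tmp/nonarray.json
printf '{"subsystems": []}\n' > /tmp/array.json

check_subsystems_array() {
  # Mirrors the check in spirit only: cfg["subsystems"] must be a list.
  python3 - "$1" <<'PY'
import json, sys
with open(sys.argv[1]) as f:
    cfg = json.load(f)
ok = isinstance(cfg, dict) and isinstance(cfg.get("subsystems"), list)
print("ok" if ok else "'subsystems' should be an array")
PY
}

check_subsystems_array /tmp/nonarray.json   # prints: 'subsystems' should be an array
check_subsystems_array /tmp/array.json      # prints: ok
```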
00:28:58.532 [2024-07-24 18:32:07.010950] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:58.532 [2024-07-24 18:32:07.010958] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:58.532 00:28:58.532 real 0m0.280s 00:28:58.532 user 0m0.165s 00:28:58.532 sys 0m0.113s 00:28:58.532 18:32:07 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.532 18:32:07 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:28:58.532 ************************************ 00:28:58.532 END TEST bdev_json_nonarray 00:28:58.532 ************************************ 00:28:58.532 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@786 -- # [[ crypto_qat == bdev ]] 00:28:58.532 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@793 -- # [[ crypto_qat == gpt ]] 00:28:58.532 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@797 -- # [[ crypto_qat == crypto_sw ]] 00:28:58.532 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:28:58.532 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # cleanup 00:28:58.532 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:28:58.532 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:28:58.790 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:28:58.790 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:28:58.790 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:28:58.790 18:32:07 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:28:58.790 00:28:58.790 real 1m7.597s 00:28:58.790 user 2m43.748s 00:28:58.790 sys 0m7.520s 00:28:58.790 18:32:07 blockdev_crypto_qat -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:28:58.790 18:32:07 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:58.790 ************************************ 00:28:58.790 END TEST blockdev_crypto_qat 00:28:58.790 ************************************ 00:28:58.790 18:32:07 -- spdk/autotest.sh@364 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:28:58.790 18:32:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:58.790 18:32:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.790 18:32:07 -- common/autotest_common.sh@10 -- # set +x 00:28:58.790 ************************************ 00:28:58.790 START TEST chaining 00:28:58.790 ************************************ 00:28:58.790 18:32:07 chaining -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:28:58.790 * Looking for test storage... 00:28:58.790 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:28:58.790 18:32:07 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@7 -- # uname -s 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:58.790 18:32:07 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.790 18:32:07 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.790 18:32:07 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.790 18:32:07 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.790 18:32:07 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.790 18:32:07 chaining -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.790 18:32:07 chaining -- paths/export.sh@5 -- # export PATH 00:28:58.790 18:32:07 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@47 -- # : 0 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:58.790 18:32:07 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:28:58.790 18:32:07 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:28:58.790 18:32:07 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 
33445566778899001122334455001122) 00:28:58.790 18:32:07 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:28:58.790 18:32:07 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:28:58.790 18:32:07 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.790 18:32:07 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:58.790 18:32:07 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:58.790 18:32:07 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:28:58.790 18:32:07 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:06.943 18:32:15 
chaining -- nvmf/common.sh@296 -- # e810=() 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@297 -- # x722=() 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@298 -- # mlx=() 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.943 18:32:15 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.202 18:32:15 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.202 18:32:15 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.202 18:32:15 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:07.203 18:32:15 chaining -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:b7:00.0 (0x8086 - 0x159b)' 00:29:07.203 Found 0000:b7:00.0 (0x8086 - 0x159b) 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:b7:00.1 (0x8086 - 0x159b)' 00:29:07.203 Found 0000:b7:00.1 (0x8086 - 0x159b) 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:b7:00.0: cvl_0_0' 00:29:07.203 Found net devices under 0000:b7:00.0: cvl_0_0 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:b7:00.1: cvl_0_1' 00:29:07.203 Found net devices under 0000:b7:00.1: cvl_0_1 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.203 18:32:15 chaining -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:07.203 18:32:15 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:07.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:29:07.462 00:29:07.462 --- 10.0.0.2 ping statistics --- 00:29:07.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.462 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:29:07.462 00:29:07.462 --- 10.0.0.1 ping statistics --- 00:29:07.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.462 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@422 -- # return 0 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:07.462 18:32:15 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:07.462 18:32:15 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:07.462 18:32:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@481 -- # nvmfpid=2375854 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:07.462 18:32:15 chaining -- nvmf/common.sh@482 -- # waitforlisten 2375854 00:29:07.462 18:32:15 chaining -- common/autotest_common.sh@831 -- # '[' -z 2375854 ']' 00:29:07.462 18:32:15 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.462 18:32:15 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:07.462 18:32:15 chaining -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.462 18:32:15 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:07.463 18:32:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:07.463 [2024-07-24 18:32:15.943571] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:29:07.463 [2024-07-24 18:32:15.943622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:01.0 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:01.1 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:01.2 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:01.3 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:01.4 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:01.5 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:01.6 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:01.7 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:29:07.463 EAL: Requested device 0000:b3:02.0 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:02.1 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:02.2 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:02.3 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:02.4 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:02.5 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:02.6 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b3:02.7 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:01.0 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:01.1 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:01.2 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:01.3 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:01.4 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:01.5 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: 
Requested device 0000:b5:01.6 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:01.7 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:02.0 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:02.1 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:02.2 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:02.3 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:02.4 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:02.5 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:02.6 cannot be used 00:29:07.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.463 EAL: Requested device 0000:b5:02.7 cannot be used 00:29:07.463 [2024-07-24 18:32:16.042541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.722 [2024-07-24 18:32:16.115030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.722 [2024-07-24 18:32:16.115068] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.722 [2024-07-24 18:32:16.115078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.722 [2024-07-24 18:32:16.115086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:07.722 [2024-07-24 18:32:16.115093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.722 [2024-07-24 18:32:16.115114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.291 18:32:16 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:08.291 18:32:16 chaining -- common/autotest_common.sh@864 -- # return 0 00:29:08.291 18:32:16 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.291 18:32:16 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:08.291 18:32:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:08.291 18:32:16 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@69 -- # mktemp 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.4nsM9BHVJN 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@69 -- # mktemp 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.mUrATa4Qlc 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:29:08.291 18:32:16 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.291 18:32:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:08.291 malloc0 00:29:08.291 true 00:29:08.291 true 00:29:08.291 [2024-07-24 18:32:16.821108] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:08.291 crypto0 00:29:08.291 [2024-07-24 18:32:16.829134] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:29:08.291 crypto1 00:29:08.291 [2024-07-24 18:32:16.837228] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.291 [2024-07-24 18:32:16.853383] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:08.291 18:32:16 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@85 -- # update_stats 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:08.291 18:32:16 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:08.291 18:32:16 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.291 18:32:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:08.550 18:32:16 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:08.550 18:32:16 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.550 18:32:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:08.550 18:32:16 chaining -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:08.550 18:32:16 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:08.550 18:32:16 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.550 18:32:16 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:08.550 18:32:16 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.550 18:32:17 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:29:08.550 18:32:17 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:29:08.550 18:32:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:08.551 18:32:17 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.551 18:32:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:08.551 18:32:17 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.4nsM9BHVJN bs=1K count=64 00:29:08.551 64+0 records in 00:29:08.551 64+0 records out 00:29:08.551 65536 bytes (66 kB, 64 KiB) copied, 0.000306001 s, 214 MB/s 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.4nsM9BHVJN --ob Nvme0n1 --bs 65536 --count 1 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@25 -- # local config 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:08.551 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:08.551 "subsystems": [ 00:29:08.551 { 00:29:08.551 "subsystem": "bdev", 00:29:08.551 "config": [ 00:29:08.551 { 00:29:08.551 "method": "bdev_nvme_attach_controller", 00:29:08.551 "params": { 00:29:08.551 "trtype": "tcp", 00:29:08.551 "adrfam": "IPv4", 00:29:08.551 "name": "Nvme0", 00:29:08.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.551 "traddr": "10.0.0.2", 00:29:08.551 "trsvcid": "4420" 00:29:08.551 } 00:29:08.551 }, 00:29:08.551 { 00:29:08.551 "method": "bdev_set_options", 00:29:08.551 "params": { 00:29:08.551 "bdev_auto_examine": false 00:29:08.551 } 00:29:08.551 } 00:29:08.551 ] 00:29:08.551 } 00:29:08.551 ] 00:29:08.551 }' 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.4nsM9BHVJN --ob Nvme0n1 --bs 65536 --count 1 00:29:08.551 18:32:17 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:08.551 "subsystems": [ 00:29:08.551 { 00:29:08.551 
"subsystem": "bdev", 00:29:08.551 "config": [ 00:29:08.551 { 00:29:08.551 "method": "bdev_nvme_attach_controller", 00:29:08.551 "params": { 00:29:08.551 "trtype": "tcp", 00:29:08.551 "adrfam": "IPv4", 00:29:08.551 "name": "Nvme0", 00:29:08.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.551 "traddr": "10.0.0.2", 00:29:08.551 "trsvcid": "4420" 00:29:08.551 } 00:29:08.551 }, 00:29:08.551 { 00:29:08.551 "method": "bdev_set_options", 00:29:08.551 "params": { 00:29:08.551 "bdev_auto_examine": false 00:29:08.551 } 00:29:08.551 } 00:29:08.551 ] 00:29:08.551 } 00:29:08.551 ] 00:29:08.551 }' 00:29:08.810 [2024-07-24 18:32:17.151512] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:29:08.810 [2024-07-24 18:32:17.151558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375992 ] 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b3:01.0 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b3:01.1 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b3:01.2 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b3:01.3 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b3:01.4 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b3:01.5 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b3:01.6 cannot be used 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:01.5 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:01.6 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:01.7 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:02.0 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:02.1 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:02.2 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:02.3 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:02.4 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:02.5 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:02.6 cannot be used 00:29:08.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.810 EAL: Requested device 0000:b5:02.7 cannot be used 00:29:08.810 [2024-07-24 18:32:17.243992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.810 [2024-07-24 18:32:17.314109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.329  Copying: 64/64 [kB] (average 20 MBps) 00:29:09.329 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:09.329 
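The `get_stat` helper driving the checks above extracts single counters from `accel_get_stats` JSON with two jq filters: a bare `.sequence_executed` for the top-level counter and a `select()` over `.operations[]` for per-opcode counts. A minimal sketch of both filters against an illustrative stats document (the values are invented for the demo; the real test reads them with `rpc_cmd accel_get_stats`):

```shell
# Illustrative accel_get_stats-style output; counter values are invented.
stats_json='{"sequence_executed": 13,
  "operations": [
    {"opcode": "encrypt", "executed": 2},
    {"opcode": "decrypt", "executed": 12},
    {"opcode": "copy",    "executed": 4}
  ]}'

# get_stat sequence_executed -> top-level counter.
seq_executed=$(printf '%s' "$stats_json" | jq -r .sequence_executed)

# get_stat executed decrypt -> per-opcode counter.
dec_executed=$(printf '%s' "$stats_json" \
  | jq -r '.operations[] | select(.opcode == "decrypt").executed')

echo "$seq_executed $dec_executed"
```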
18:32:17 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:09.329 18:32:17 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.329 18:32:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:09.329 18:32:17 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:09.329 18:32:17 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.329 18:32:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:09.329 18:32:17 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:09.329 18:32:17 chaining -- 
bdev/chaining.sh@39 -- # event=executed 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:09.329 18:32:17 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.329 18:32:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:09.329 18:32:17 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:29:09.329 18:32:17 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:09.330 18:32:17 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.330 18:32:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:09.330 18:32:17 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@96 -- # update_stats 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@37 
-- # local event opcode rpc 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:09.330 18:32:17 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.330 18:32:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:09.330 18:32:17 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:09.330 18:32:17 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:09.589 18:32:17 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.589 18:32:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:09.589 18:32:17 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:09.589 18:32:17 
chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:09.589 18:32:17 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:09.589 18:32:17 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.589 18:32:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:09.589 18:32:17 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:09.589 18:32:18 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.589 18:32:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:09.589 18:32:18 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.mUrATa4Qlc --ib Nvme0n1 --bs 65536 --count 1 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@25 -- # local config 00:29:09.589 
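The `update_stats` calls above snapshot every counter into the `stats` array before an I/O, and the `(( new == stats[...] + delta ))` checks afterwards assert the expected deltas. A POSIX-shell sketch with the numbers this log shows (13 sequences and 12 decrypts before the spdk_dd read; the read adds one sequence and, with two crypto bdevs chained, two decrypt operations):

```shell
# Snapshot taken by update_stats before the spdk_dd read (values from the log).
stats_sequence_executed=13
stats_decrypt_executed=12

# Counters observed after the read completes.
new_sequence_executed=14
new_decrypt_executed=14

# One accel sequence per spdk_dd invocation; two decrypts, one per chained
# crypto bdev on the read path.
[ "$new_sequence_executed" -eq $((stats_sequence_executed + 1)) ] && seq_ok=yes
[ "$new_decrypt_executed" -eq $((stats_decrypt_executed + 2)) ] && dec_ok=yes
echo "$seq_ok $dec_ok"
```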
18:32:18 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:09.589 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:09.589 18:32:18 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:09.589 "subsystems": [ 00:29:09.589 { 00:29:09.589 "subsystem": "bdev", 00:29:09.589 "config": [ 00:29:09.589 { 00:29:09.589 "method": "bdev_nvme_attach_controller", 00:29:09.589 "params": { 00:29:09.589 "trtype": "tcp", 00:29:09.589 "adrfam": "IPv4", 00:29:09.589 "name": "Nvme0", 00:29:09.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.589 "traddr": "10.0.0.2", 00:29:09.589 "trsvcid": "4420" 00:29:09.589 } 00:29:09.589 }, 00:29:09.589 { 00:29:09.589 "method": "bdev_set_options", 00:29:09.590 "params": { 00:29:09.590 "bdev_auto_examine": false 00:29:09.590 } 00:29:09.590 } 00:29:09.590 ] 00:29:09.590 } 00:29:09.590 ] 00:29:09.590 }' 00:29:09.590 18:32:18 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.mUrATa4Qlc --ib Nvme0n1 --bs 65536 --count 1 00:29:09.590 18:32:18 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:09.590 "subsystems": [ 00:29:09.590 { 00:29:09.590 "subsystem": "bdev", 00:29:09.590 "config": [ 00:29:09.590 { 00:29:09.590 "method": "bdev_nvme_attach_controller", 00:29:09.590 "params": { 00:29:09.590 "trtype": "tcp", 00:29:09.590 "adrfam": "IPv4", 00:29:09.590 "name": "Nvme0", 00:29:09.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.590 "traddr": "10.0.0.2", 00:29:09.590 "trsvcid": "4420" 00:29:09.590 } 00:29:09.590 }, 00:29:09.590 { 00:29:09.590 "method": "bdev_set_options", 00:29:09.590 "params": { 00:29:09.590 "bdev_auto_examine": false 00:29:09.590 } 00:29:09.590 } 00:29:09.590 ] 
00:29:09.590 } 00:29:09.590 ] 00:29:09.590 }' 00:29:09.590 [2024-07-24 18:32:18.148315] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:29:09.590 [2024-07-24 18:32:18.148360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376228 ] 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:01.0 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:01.1 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:01.2 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:01.3 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:01.4 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:01.5 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:01.6 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:01.7 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:02.0 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:02.1 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b3:02.2 
00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b5:02.1 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b5:02.2 cannot be used 00:29:09.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.849 EAL: Requested device 0000:b5:02.3 cannot be used 00:29:09.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.850 EAL: Requested device 0000:b5:02.4 cannot be used 00:29:09.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.850 EAL: Requested device 0000:b5:02.5 cannot be used 00:29:09.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.850 EAL: Requested device 0000:b5:02.6 cannot be used 00:29:09.850 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:09.850 EAL: Requested device 0000:b5:02.7 cannot be used 00:29:09.850 [2024-07-24 18:32:18.240351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.850 [2024-07-24 18:32:18.311160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.368  Copying: 64/64 [kB] (average 20 MBps) 00:29:10.368 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.368 18:32:18 
chaining -- common/autotest_common.sh@10 -- # set +x 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@44 
-- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:10.368 18:32:18 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:10.368 18:32:18 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.628 18:32:18 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:29:10.628 18:32:18 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.4nsM9BHVJN /tmp/tmp.mUrATa4Qlc 00:29:10.628 18:32:18 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:29:10.628 18:32:18 chaining -- bdev/chaining.sh@25 -- # local config 00:29:10.628 18:32:18 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:10.628 18:32:18 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:10.628 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:10.628 18:32:19 chaining -- 
bdev/chaining.sh@31 -- # config='{ 00:29:10.628 "subsystems": [ 00:29:10.628 { 00:29:10.628 "subsystem": "bdev", 00:29:10.628 "config": [ 00:29:10.628 { 00:29:10.628 "method": "bdev_nvme_attach_controller", 00:29:10.628 "params": { 00:29:10.628 "trtype": "tcp", 00:29:10.628 "adrfam": "IPv4", 00:29:10.628 "name": "Nvme0", 00:29:10.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.628 "traddr": "10.0.0.2", 00:29:10.628 "trsvcid": "4420" 00:29:10.628 } 00:29:10.628 }, 00:29:10.628 { 00:29:10.628 "method": "bdev_set_options", 00:29:10.628 "params": { 00:29:10.628 "bdev_auto_examine": false 00:29:10.628 } 00:29:10.628 } 00:29:10.628 ] 00:29:10.628 } 00:29:10.628 ] 00:29:10.628 }' 00:29:10.628 18:32:19 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:29:10.628 18:32:19 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:10.628 "subsystems": [ 00:29:10.628 { 00:29:10.628 "subsystem": "bdev", 00:29:10.628 "config": [ 00:29:10.628 { 00:29:10.628 "method": "bdev_nvme_attach_controller", 00:29:10.628 "params": { 00:29:10.628 "trtype": "tcp", 00:29:10.628 "adrfam": "IPv4", 00:29:10.628 "name": "Nvme0", 00:29:10.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.628 "traddr": "10.0.0.2", 00:29:10.628 "trsvcid": "4420" 00:29:10.628 } 00:29:10.628 }, 00:29:10.628 { 00:29:10.628 "method": "bdev_set_options", 00:29:10.628 "params": { 00:29:10.628 "bdev_auto_examine": false 00:29:10.628 } 00:29:10.628 } 00:29:10.628 ] 00:29:10.628 } 00:29:10.628 ] 00:29:10.628 }' 00:29:10.628 [2024-07-24 18:32:19.055018] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
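Each spdk_dd run above builds its JSON config on the fly — gen_nvme.sh output, with jq appending a `bdev_set_options` entry that disables auto-examine — and hands it to `spdk_dd -c /dev/fd/62` without ever writing a file. A minimal sketch of that hand-off, with `cat` standing in for `spdk_dd` and an abbreviated config literal (not the full one echoed in the log):

```shell
# Abbreviated stand-in for the config that gen_nvme.sh + jq produce in the log.
config='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_set_options","params":{"bdev_auto_examine":false}}]}]}'

# spdk_dd reads its config from a /dev/fd/NN path created by process
# substitution; here cat plays the role of the consumer.
received="$(cat <(printf '%s' "$config"))"
printf '%s\n' "$received"
```

The point of the `/dev/fd` detour is that the consumer only needs a readable path, so the config never has to exist on disk.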
00:29:10.628 [2024-07-24 18:32:19.055066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376456 ] 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:01.0 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:01.1 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:01.2 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:01.3 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:01.4 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:01.5 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:01.6 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:01.7 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:02.0 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:02.1 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:02.2 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:02.3 cannot be used 
00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:02.4 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:02.5 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.628 EAL: Requested device 0000:b3:02.6 cannot be used 00:29:10.628 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b3:02.7 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:01.0 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:01.1 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:01.2 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:01.3 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:01.4 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:01.5 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:01.6 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:01.7 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:02.0 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:02.1 cannot be used 00:29:10.629 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:02.2 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:02.3 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:02.4 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:02.5 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:02.6 cannot be used 00:29:10.629 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:10.629 EAL: Requested device 0000:b5:02.7 cannot be used 00:29:10.629 [2024-07-24 18:32:19.148895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.888 [2024-07-24 18:32:19.224472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.147  Copying: 64/64 [kB] (average 10 MBps) 00:29:11.147 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@106 -- # update_stats 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:11.147 18:32:19 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.147 18:32:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:11.147 18:32:19 chaining 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:11.147 18:32:19 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.147 18:32:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:11.147 18:32:19 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:11.147 18:32:19 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.147 18:32:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:11.147 18:32:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:11.147 18:32:19 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:11.407 18:32:19 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.407 18:32:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:11.407 18:32:19 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.4nsM9BHVJN --ob Nvme0n1 --bs 4096 --count 16 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@25 -- # local config 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:11.407 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:11.407 "subsystems": [ 00:29:11.407 { 00:29:11.407 "subsystem": "bdev", 00:29:11.407 "config": [ 00:29:11.407 { 00:29:11.407 "method": "bdev_nvme_attach_controller", 00:29:11.407 "params": 
{ 00:29:11.407 "trtype": "tcp", 00:29:11.407 "adrfam": "IPv4", 00:29:11.407 "name": "Nvme0", 00:29:11.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.407 "traddr": "10.0.0.2", 00:29:11.407 "trsvcid": "4420" 00:29:11.407 } 00:29:11.407 }, 00:29:11.407 { 00:29:11.407 "method": "bdev_set_options", 00:29:11.407 "params": { 00:29:11.407 "bdev_auto_examine": false 00:29:11.407 } 00:29:11.407 } 00:29:11.407 ] 00:29:11.407 } 00:29:11.407 ] 00:29:11.407 }' 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.4nsM9BHVJN --ob Nvme0n1 --bs 4096 --count 16 00:29:11.407 18:32:19 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:11.407 "subsystems": [ 00:29:11.407 { 00:29:11.407 "subsystem": "bdev", 00:29:11.407 "config": [ 00:29:11.407 { 00:29:11.407 "method": "bdev_nvme_attach_controller", 00:29:11.407 "params": { 00:29:11.407 "trtype": "tcp", 00:29:11.407 "adrfam": "IPv4", 00:29:11.407 "name": "Nvme0", 00:29:11.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.407 "traddr": "10.0.0.2", 00:29:11.407 "trsvcid": "4420" 00:29:11.407 } 00:29:11.407 }, 00:29:11.407 { 00:29:11.407 "method": "bdev_set_options", 00:29:11.407 "params": { 00:29:11.407 "bdev_auto_examine": false 00:29:11.407 } 00:29:11.407 } 00:29:11.407 ] 00:29:11.407 } 00:29:11.407 ] 00:29:11.407 }' 00:29:11.407 [2024-07-24 18:32:19.907129] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
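The run just started writes 16 blocks of 4096 bytes from /tmp/tmp.4nsM9BHVJN into Nvme0n1. The stat checks that follow expect `sequence_executed` to grow by the block count and the encrypt counter by twice that (31 == 15 + 16 and 36 == 4 + 32 in this log); the factor of two is read off the log's own assertions, not from any documentation. The arithmetic, as a sketch:

```shell
bs=4096
count=16
bytes=$(( bs * count ))        # 65536 bytes pushed through the chain

seq_delta=$(( count ))         # one accel sequence per block: 15 -> 31
enc_delta=$(( 2 * count ))     # encrypt ops per the log's own check: 4 -> 36
echo "$bytes bytes, sequence +$seq_delta, encrypt +$enc_delta"
```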
00:29:11.407 [2024-07-24 18:32:19.907177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376548 ] 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.407 EAL: Requested device 0000:b3:01.0 cannot be used 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.407 EAL: Requested device 0000:b3:01.1 cannot be used 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.407 EAL: Requested device 0000:b3:01.2 cannot be used 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.407 EAL: Requested device 0000:b3:01.3 cannot be used 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.407 EAL: Requested device 0000:b3:01.4 cannot be used 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.407 EAL: Requested device 0000:b3:01.5 cannot be used 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.407 EAL: Requested device 0000:b3:01.6 cannot be used 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.407 EAL: Requested device 0000:b3:01.7 cannot be used 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.407 EAL: Requested device 0000:b3:02.0 cannot be used 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.407 EAL: Requested device 0000:b3:02.1 cannot be used 00:29:11.407 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b3:02.2 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b3:02.3 cannot be used 
00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b3:02.4 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b3:02.5 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b3:02.6 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b3:02.7 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:01.0 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:01.1 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:01.2 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:01.3 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:01.4 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:01.5 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:01.6 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:01.7 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:02.0 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:02.1 cannot be used 00:29:11.408 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:02.2 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:02.3 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:02.4 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:02.5 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:02.6 cannot be used 00:29:11.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:11.408 EAL: Requested device 0000:b5:02.7 cannot be used 00:29:11.408 [2024-07-24 18:32:19.997585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.667 [2024-07-24 18:32:20.074362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.927  Copying: 64/64 [kB] (average 12 MBps) 00:29:11.927 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:11.927 18:32:20 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.927 18:32:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:11.927 18:32:20 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.927 18:32:20 
chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:11.927 18:32:20 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.927 18:32:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:11.927 18:32:20 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:11.927 18:32:20 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
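The `get_stat`/`update_stats` pattern running above snapshots each accel counter into a bash associative array, then asserts that the post-I/O value equals the baseline plus the expected delta. A stripped-down sketch of that bookkeeping — the numbers mirror this log, while a real run would pull them from `rpc_cmd accel_get_stats` piped through jq:

```shell
declare -A stats

# update_stats: snapshot the counters after the previous I/O completed.
stats[sequence_executed]=15
stats[encrypt_executed]=4

# Values a later accel_get_stats call would report after the 16-block write.
now_sequence=31
now_encrypt=36

# The checks from chaining.sh, verbatim in spirit:
(( now_sequence == stats[sequence_executed] + 16 )) && echo "sequence OK"
(( now_encrypt == stats[encrypt_executed] + 32 )) && echo "encrypt OK"
```

Keeping the baseline in an array is what lets each check assert on a *delta*, so the test stays valid no matter how many operations earlier stages of the suite already executed.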
00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@114 -- # update_stats 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:12.187 18:32:20 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:12.187 18:32:20 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:12.187 18:32:20 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@117 -- # : 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.mUrATa4Qlc --ib Nvme0n1 --bs 4096 --count 16 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@25 -- # local config 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:29:12.187 18:32:20 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:29:12.187 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:29:12.447 18:32:20 chaining -- bdev/chaining.sh@31 -- # config='{ 00:29:12.447 "subsystems": [ 00:29:12.447 { 00:29:12.447 "subsystem": "bdev", 00:29:12.447 "config": [ 00:29:12.447 { 00:29:12.447 
"method": "bdev_nvme_attach_controller", 00:29:12.447 "params": { 00:29:12.447 "trtype": "tcp", 00:29:12.447 "adrfam": "IPv4", 00:29:12.447 "name": "Nvme0", 00:29:12.447 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:12.447 "traddr": "10.0.0.2", 00:29:12.447 "trsvcid": "4420" 00:29:12.447 } 00:29:12.447 }, 00:29:12.447 { 00:29:12.447 "method": "bdev_set_options", 00:29:12.447 "params": { 00:29:12.447 "bdev_auto_examine": false 00:29:12.447 } 00:29:12.447 } 00:29:12.447 ] 00:29:12.447 } 00:29:12.447 ] 00:29:12.447 }' 00:29:12.447 18:32:20 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.mUrATa4Qlc --ib Nvme0n1 --bs 4096 --count 16 00:29:12.447 18:32:20 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:29:12.447 "subsystems": [ 00:29:12.447 { 00:29:12.447 "subsystem": "bdev", 00:29:12.447 "config": [ 00:29:12.447 { 00:29:12.447 "method": "bdev_nvme_attach_controller", 00:29:12.447 "params": { 00:29:12.447 "trtype": "tcp", 00:29:12.447 "adrfam": "IPv4", 00:29:12.447 "name": "Nvme0", 00:29:12.447 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:12.447 "traddr": "10.0.0.2", 00:29:12.447 "trsvcid": "4420" 00:29:12.447 } 00:29:12.447 }, 00:29:12.447 { 00:29:12.447 "method": "bdev_set_options", 00:29:12.447 "params": { 00:29:12.447 "bdev_auto_examine": false 00:29:12.447 } 00:29:12.447 } 00:29:12.447 ] 00:29:12.447 } 00:29:12.447 ] 00:29:12.447 }' 00:29:12.447 [2024-07-24 18:32:20.866848] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
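This final spdk_dd reads the 16 blocks back out of Nvme0n1 into /tmp/tmp.mUrATa4Qlc so it can be cmp'd against the original input, proving the encrypt/decrypt chain round-trips byte-for-byte. The same shape with plain files — `cp` stands in here for the write-then-read through the crypto bdev:

```shell
src=$(mktemp)
dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=4096 count=16 status=none

# In chaining.sh the data takes a detour through the encrypted Nvme0n1 bdev;
# a plain copy keeps this sketch self-contained.
cp "$src" "$dst"

result=fail
cmp -s "$src" "$dst" && result=ok
echo "round trip: $result"
rm -f "$src" "$dst"
```

`cmp -s` is silent and reports purely via exit status, which is why the test script can use it directly as a pass/fail gate.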
00:29:12.447 [2024-07-24 18:32:20.866896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376840 ] 00:29:12.447 qat_pci_device_allocate(): Reached maximum number of QAT devices -- EAL: Requested device cannot be used [message repeated for each of the 32 QAT VFs 0000:b3:01.0-0000:b3:02.7 and 0000:b5:01.0-0000:b5:02.7] 00:29:12.448 [2024-07-24 18:32:20.959806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.448 [2024-07-24 18:32:21.030205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.015  Copying: 64/64 [kB] (average 500 kBps) 00:29:13.015 00:29:13.015 18:32:21 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:29:13.015 18:32:21 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:13.015 18:32:21 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:13.015 18:32:21 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:13.015 18:32:21 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:13.015 18:32:21 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:13.015 18:32:21 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:29:13.015 18:32:21 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:13.015 18:32:21 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.015 18:32:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:13.015 18:32:21 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.275 18:32:21
chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
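The get_stat helper traced above pairs `rpc_cmd accel_get_stats` with a jq filter over `.operations[]`. The same extraction can be exercised against a canned stats document; the JSON below is illustrative sample data (not real RPC output), and python3 stands in for jq so the sketch runs anywhere.

```shell
# Mimic chaining.sh's get_stat: pull a per-opcode 'executed' counter out of
# accel_get_stats output. The stats JSON here is a made-up sample.
stats_json='{"sequence_executed": 47,
  "operations": [
    {"opcode": "encrypt", "executed": 36},
    {"opcode": "decrypt", "executed": 46},
    {"opcode": "copy",    "executed": 4}
  ]}'

# usage: get_stat <opcode>
# jq equivalent: jq -r '.operations[] | select(.opcode == "<opcode>").executed'
get_stat() {
    STATS="$stats_json" python3 -c '
import json, os, sys
stats = json.loads(os.environ["STATS"])
op = sys.argv[1]
print(next(o["executed"] for o in stats["operations"] if o["opcode"] == op))
' "$1"
}

echo "encrypt executed: $(get_stat encrypt)"
echo "copy executed: $(get_stat copy)"
```

The test's assertions then compare these counters before and after the spdk_dd run, e.g. `(( 47 == stats[sequence_executed] + 16 ))` for 16 chained 4 KiB blocks.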
00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.4nsM9BHVJN /tmp/tmp.mUrATa4Qlc 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.4nsM9BHVJN /tmp/tmp.mUrATa4Qlc 00:29:13.275 18:32:21 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@117 -- # sync 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@120 -- # set +e 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:13.275 rmmod nvme_tcp 
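The nvmfcleanup trace above retries `modprobe -v -r nvme-tcp` inside a `set +e` / `for i in {1..20}` loop, since the module can stay busy briefly after the last connection drops. The same shape, with modprobe swapped for a mock that fails twice so the sketch is runnable without root:

```shell
# The {1..20} retry loop from nvmftestfini, with modprobe replaced by a mock
# that fails on its first two calls (a module still in use) then succeeds.
attempts=0
mock_modprobe() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }

set +e                       # tolerate transient failures inside the loop
for i in {1..20}; do
    mock_modprobe && break
    sleep 0.01
done
set -e                       # restore errexit once the unload has succeeded
echo "unloaded after $attempts attempts"
```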
00:29:13.275 rmmod nvme_fabrics 00:29:13.275 rmmod nvme_keyring 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@124 -- # set -e 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@125 -- # return 0 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@489 -- # '[' -n 2375854 ']' 00:29:13.275 18:32:21 chaining -- nvmf/common.sh@490 -- # killprocess 2375854 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@950 -- # '[' -z 2375854 ']' 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@954 -- # kill -0 2375854 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@955 -- # uname 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2375854 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2375854' 00:29:13.275 killing process with pid 2375854 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@969 -- # kill 2375854 00:29:13.275 18:32:21 chaining -- common/autotest_common.sh@974 -- # wait 2375854 00:29:13.534 18:32:22 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:13.534 18:32:22 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:13.534 18:32:22 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:13.534 18:32:22 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:13.534 18:32:22 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:13.534 18:32:22 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.534 18:32:22 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
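killprocess, as traced above, guards the signal with a liveness check (`kill -0`), resolves the pid to a process name, and refuses to kill if that name is `sudo`. A condensed sketch with the same shape; the background `sleep` is a stand-in for the nvmf target process.

```shell
# Condensed form of autotest_common.sh's killprocess pattern.
sleep 30 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then            # pid still alive?
    process_name=$(ps --no-headers -o comm= -p "$pid")
    if [ "$process_name" != "sudo" ]; then     # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true        # reap; SIGTERM exit is expected
    fi
fi
kill -0 "$pid" 2>/dev/null || echo "pid $pid is gone"
```

The later `kill -0 ... || echo 'No such process'` lines in this log are the same check running after the process has already exited, which is why the second killprocess call reports "Process with pid ... is not found" instead of failing.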
00:29:13.534 18:32:22 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.067 18:32:24 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:16.067 18:32:24 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:29:16.067 18:32:24 chaining -- bdev/chaining.sh@132 -- # bperfpid=2377431 00:29:16.067 18:32:24 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:16.068 18:32:24 chaining -- bdev/chaining.sh@134 -- # waitforlisten 2377431 00:29:16.068 18:32:24 chaining -- common/autotest_common.sh@831 -- # '[' -z 2377431 ']' 00:29:16.068 18:32:24 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.068 18:32:24 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:16.068 18:32:24 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.068 18:32:24 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:16.068 18:32:24 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:16.068 [2024-07-24 18:32:24.180985] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
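waitforlisten, invoked right after bdevperf is launched above, polls until the RPC UNIX domain socket appears. A minimal sketch of that polling loop; the "server" here is a mock (python3 binding a socket after a short delay) and the temp path stands in for /var/tmp/spdk.sock.

```shell
# Sketch of the waitforlisten pattern: poll for the RPC socket with a bounded
# retry budget. The server side is a mock, purely for illustration.
rpc_addr=$(mktemp -u)            # stand-in for /var/tmp/spdk.sock
( sleep 0.2
  python3 -c '
import socket, sys
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind(sys.argv[1])              # creates the socket file on disk
s.listen(1)
' "$rpc_addr" ) &

max_retries=100                  # mirrors local max_retries=100 in the trace
while [ "$max_retries" -gt 0 ] && [ ! -S "$rpc_addr" ]; do
    sleep 0.1
    max_retries=$((max_retries - 1))
done
wait
[ -S "$rpc_addr" ] && echo "listening on UNIX domain socket $rpc_addr"
```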
00:29:16.068 [2024-07-24 18:32:24.181035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377431 ] 00:29:16.068 qat_pci_device_allocate(): Reached maximum number of QAT devices -- EAL: Requested device cannot be used [message repeated for each of the 32 QAT VFs 0000:b3:01.0-0000:b3:02.7 and 0000:b5:01.0-0000:b5:02.7] 00:29:16.068 [2024-07-24 18:32:24.273068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.068 [2024-07-24 18:32:24.347014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.636 18:32:24 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:16.636 18:32:24 chaining -- common/autotest_common.sh@864 -- # return 0 00:29:16.636 18:32:24 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:29:16.636 18:32:24 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.636 18:32:24 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:16.636 malloc0 00:29:16.636 true 00:29:16.636 true 00:29:16.636 [2024-07-24 18:32:25.087965] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:16.636 crypto0 00:29:16.636 [2024-07-24 18:32:25.095987] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:29:16.636 crypto1 00:29:16.636 18:32:25 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.636 18:32:25 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
perform_tests 00:29:16.636 Running I/O for 5 seconds... 00:29:21.911 00:29:21.911 Latency(us) 00:29:21.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.911 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:21.911 Verification LBA range: start 0x0 length 0x2000 00:29:21.911 crypto1 : 5.01 18389.44 71.83 0.00 0.00 13888.46 4141.88 9909.04 00:29:21.911 =================================================================================================================== 00:29:21.911 Total : 18389.44 71.83 0.00 0.00 13888.46 4141.88 9909.04 00:29:21.911 0 00:29:21.911 18:32:30 chaining -- bdev/chaining.sh@146 -- # killprocess 2377431 00:29:21.911 18:32:30 chaining -- common/autotest_common.sh@950 -- # '[' -z 2377431 ']' 00:29:21.911 18:32:30 chaining -- common/autotest_common.sh@954 -- # kill -0 2377431 00:29:21.911 18:32:30 chaining -- common/autotest_common.sh@955 -- # uname 00:29:21.911 18:32:30 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:21.911 18:32:30 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2377431 00:29:21.911 18:32:30 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:21.911 18:32:30 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:21.911 18:32:30 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2377431' 00:29:21.911 killing process with pid 2377431 00:29:21.911 18:32:30 chaining -- common/autotest_common.sh@969 -- # kill 2377431 00:29:21.911 Received shutdown signal, test time was about 5.000000 seconds 00:29:21.912 00:29:21.912 Latency(us) 00:29:21.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.912 =================================================================================================================== 00:29:21.912 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.912 18:32:30 chaining -- 
common/autotest_common.sh@974 -- # wait 2377431 00:29:21.912 18:32:30 chaining -- bdev/chaining.sh@152 -- # bperfpid=2378493 00:29:21.912 18:32:30 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:21.912 18:32:30 chaining -- bdev/chaining.sh@154 -- # waitforlisten 2378493 00:29:21.912 18:32:30 chaining -- common/autotest_common.sh@831 -- # '[' -z 2378493 ']' 00:29:21.912 18:32:30 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.912 18:32:30 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:21.912 18:32:30 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.912 18:32:30 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:21.912 18:32:30 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:21.912 [2024-07-24 18:32:30.485777] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:29:21.912 [2024-07-24 18:32:30.485830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378493 ] 00:29:22.171 qat_pci_device_allocate(): Reached maximum number of QAT devices -- EAL: Requested device cannot be used [message repeated for each of the 32 QAT VFs 0000:b3:01.0-0000:b3:02.7 and 0000:b5:01.0-0000:b5:02.7] 00:29:22.171 [2024-07-24 18:32:30.577693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.171 [2024-07-24 18:32:30.651447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.739 18:32:31 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:22.739 18:32:31 chaining -- common/autotest_common.sh@864 -- # return 0 00:29:22.739 18:32:31 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:29:22.739 18:32:31 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.739 18:32:31 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:22.998 malloc0 00:29:22.998 true 00:29:22.998 true 00:29:22.998 [2024-07-24 18:32:31.408543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:29:22.998 [2024-07-24 18:32:31.408583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.998 [2024-07-24 18:32:31.408597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2ae90 00:29:22.998 [2024-07-24 18:32:31.408606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.998 [2024-07-24
18:32:31.409362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.998 [2024-07-24 18:32:31.409383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:29:22.998 pt0 00:29:22.998 [2024-07-24 18:32:31.416571] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:22.998 crypto0 00:29:22.998 [2024-07-24 18:32:31.424589] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:29:22.998 crypto1 00:29:22.998 18:32:31 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.998 18:32:31 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:22.998 Running I/O for 5 seconds... 00:29:28.272 00:29:28.272 Latency(us) 00:29:28.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.272 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:28.272 Verification LBA range: start 0x0 length 0x2000 00:29:28.272 crypto1 : 5.01 14254.42 55.68 0.00 0.00 17920.89 4194.30 11272.19 00:29:28.272 =================================================================================================================== 00:29:28.272 Total : 14254.42 55.68 0.00 0.00 17920.89 4194.30 11272.19 00:29:28.272 0 00:29:28.272 18:32:36 chaining -- bdev/chaining.sh@167 -- # killprocess 2378493 00:29:28.272 18:32:36 chaining -- common/autotest_common.sh@950 -- # '[' -z 2378493 ']' 00:29:28.272 18:32:36 chaining -- common/autotest_common.sh@954 -- # kill -0 2378493 00:29:28.272 18:32:36 chaining -- common/autotest_common.sh@955 -- # uname 00:29:28.272 18:32:36 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:28.272 18:32:36 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2378493 00:29:28.272 18:32:36 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:28.272 18:32:36 chaining 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:28.272 18:32:36 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2378493' 00:29:28.272 killing process with pid 2378493 00:29:28.272 18:32:36 chaining -- common/autotest_common.sh@969 -- # kill 2378493 00:29:28.272 Received shutdown signal, test time was about 5.000000 seconds 00:29:28.272 00:29:28.272 Latency(us) 00:29:28.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.272 =================================================================================================================== 00:29:28.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.273 18:32:36 chaining -- common/autotest_common.sh@974 -- # wait 2378493 00:29:28.273 18:32:36 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:29:28.273 18:32:36 chaining -- bdev/chaining.sh@170 -- # killprocess 2378493 00:29:28.273 18:32:36 chaining -- common/autotest_common.sh@950 -- # '[' -z 2378493 ']' 00:29:28.273 18:32:36 chaining -- common/autotest_common.sh@954 -- # kill -0 2378493 00:29:28.273 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2378493) - No such process 00:29:28.273 18:32:36 chaining -- common/autotest_common.sh@977 -- # echo 'Process with pid 2378493 is not found' 00:29:28.273 Process with pid 2378493 is not found 00:29:28.273 18:32:36 chaining -- bdev/chaining.sh@171 -- # wait 2378493 00:29:28.273 18:32:36 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.273 18:32:36 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:28.273 18:32:36 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:29:28.273 18:32:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@296 -- # e810=() 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@297 -- # x722=() 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@298 -- # mlx=() 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.273 18:32:36 
chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:b7:00.0 (0x8086 - 0x159b)' 00:29:28.273 Found 0000:b7:00.0 (0x8086 - 0x159b) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.273 
18:32:36 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:b7:00.1 (0x8086 - 0x159b)' 00:29:28.273 Found 0000:b7:00.1 (0x8086 - 0x159b) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:b7:00.0: cvl_0_0' 00:29:28.273 Found net devices under 0000:b7:00.0: cvl_0_0 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:29:28.273 18:32:36 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:b7:00.1: cvl_0_1' 00:29:28.273 Found net devices under 0000:b7:00.1: cvl_0_1 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.273 18:32:36 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.532 18:32:36 chaining -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.532 18:32:36 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.532 18:32:36 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:28.532 18:32:36 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:28.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:29:28.532 00:29:28.532 --- 10.0.0.2 ping statistics --- 00:29:28.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.532 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:28.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:29:28.532 00:29:28.532 --- 10.0.0.1 ping statistics --- 00:29:28.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.532 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@422 -- # return 0 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:28.532 18:32:37 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:28.793 18:32:37 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:29:28.793 18:32:37 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:28.793 18:32:37 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:28.793 18:32:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:28.793 18:32:37 chaining -- nvmf/common.sh@481 -- # nvmfpid=2379596 00:29:28.793 18:32:37 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:28.793 18:32:37 chaining -- nvmf/common.sh@482 -- # waitforlisten 2379596 00:29:28.793 18:32:37 chaining -- common/autotest_common.sh@831 -- # '[' -z 2379596 ']' 00:29:28.793 18:32:37 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.793 18:32:37 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:28.793 18:32:37 
chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.793 18:32:37 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:28.793 18:32:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:28.793 [2024-07-24 18:32:37.214666] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:29:28.793 [2024-07-24 18:32:37.214714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:01.0 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:01.1 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:01.2 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:01.3 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:01.4 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:01.5 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:01.6 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:01.7 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:29:28.793 EAL: Requested device 0000:b3:02.0 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:02.1 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:02.2 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:02.3 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:02.4 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:02.5 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:02.6 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b3:02.7 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:01.0 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:01.1 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:01.2 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:01.3 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:01.4 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:01.5 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: 
Requested device 0000:b5:01.6 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:01.7 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:02.0 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:02.1 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:02.2 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:02.3 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:02.4 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:02.5 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:02.6 cannot be used 00:29:28.793 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:28.793 EAL: Requested device 0000:b5:02.7 cannot be used 00:29:28.793 [2024-07-24 18:32:37.311797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.793 [2024-07-24 18:32:37.383124] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.793 [2024-07-24 18:32:37.383164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.793 [2024-07-24 18:32:37.383174] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.794 [2024-07-24 18:32:37.383182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:28.794 [2024-07-24 18:32:37.383189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.794 [2024-07-24 18:32:37.383209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.423 18:32:38 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:29.423 18:32:38 chaining -- common/autotest_common.sh@864 -- # return 0 00:29:29.423 18:32:38 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:29.423 18:32:38 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:29.423 18:32:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:29.681 18:32:38 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.681 18:32:38 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:29:29.681 18:32:38 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.681 18:32:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:29.681 malloc0 00:29:29.681 [2024-07-24 18:32:38.070610] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.681 [2024-07-24 18:32:38.086738] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.681 18:32:38 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.681 18:32:38 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:29:29.681 18:32:38 chaining -- bdev/chaining.sh@189 -- # bperfpid=2379797 00:29:29.681 18:32:38 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:29.681 18:32:38 chaining -- bdev/chaining.sh@191 -- # waitforlisten 2379797 /var/tmp/bperf.sock 00:29:29.681 18:32:38 chaining -- common/autotest_common.sh@831 -- # '[' -z 2379797 ']' 00:29:29.681 18:32:38 chaining -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:29.681 18:32:38 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:29.681 18:32:38 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:29.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:29.681 18:32:38 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:29.681 18:32:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:29.681 [2024-07-24 18:32:38.150527] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 00:29:29.681 [2024-07-24 18:32:38.150572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379797 ] 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:01.0 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:01.1 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:01.2 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:01.3 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:01.4 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:01.5 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:01.6 cannot be used 
00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:01.7 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:02.0 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:02.1 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:02.2 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:02.3 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:02.4 cannot be used 00:29:29.681 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.681 EAL: Requested device 0000:b3:02.5 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b3:02.6 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b3:02.7 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:01.0 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:01.1 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:01.2 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:01.3 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:01.4 cannot be used 00:29:29.682 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:01.5 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:01.6 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:01.7 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:02.0 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:02.1 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:02.2 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:02.3 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:02.4 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:02.5 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:02.6 cannot be used 00:29:29.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:29.682 EAL: Requested device 0000:b5:02.7 cannot be used 00:29:29.682 [2024-07-24 18:32:38.244028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.941 [2024-07-24 18:32:38.318430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.508 18:32:38 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:30.508 18:32:38 chaining -- common/autotest_common.sh@864 -- # return 0 00:29:30.508 18:32:38 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:29:30.508 18:32:38 
chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:29:30.767 [2024-07-24 18:32:39.280933] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:30.767 nvme0n1 00:29:30.767 true 00:29:30.767 crypto0 00:29:30.767 18:32:39 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:31.026 Running I/O for 5 seconds... 00:29:36.300 00:29:36.300 Latency(us) 00:29:36.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.300 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:36.300 Verification LBA range: start 0x0 length 0x2000 00:29:36.300 crypto0 : 5.01 13415.54 52.40 0.00 0.00 19034.37 2306.87 16462.64 00:29:36.300 =================================================================================================================== 00:29:36.300 Total : 13415.54 52.40 0.00 0.00 19034.37 2306.87 16462.64 00:29:36.300 0 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:36.300 
18:32:44 chaining -- bdev/chaining.sh@205 -- # sequence=134532 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@206 -- # encrypt=67266 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:36.300 18:32:44 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
accel_get_stats 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@207 -- # decrypt=67266 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:29:36.559 18:32:44 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:36.818 18:32:45 chaining -- bdev/chaining.sh@208 -- # crc32c=134532 00:29:36.818 18:32:45 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:29:36.818 18:32:45 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:29:36.818 18:32:45 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:29:36.818 18:32:45 chaining -- bdev/chaining.sh@214 -- # killprocess 2379797 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@950 -- # '[' -z 2379797 ']' 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@954 -- # kill -0 2379797 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@955 -- # uname 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2379797 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:36.819 18:32:45 
chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379797' 00:29:36.819 killing process with pid 2379797 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@969 -- # kill 2379797 00:29:36.819 Received shutdown signal, test time was about 5.000000 seconds 00:29:36.819 00:29:36.819 Latency(us) 00:29:36.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.819 =================================================================================================================== 00:29:36.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@974 -- # wait 2379797 00:29:36.819 18:32:45 chaining -- bdev/chaining.sh@219 -- # bperfpid=2380966 00:29:36.819 18:32:45 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:29:36.819 18:32:45 chaining -- bdev/chaining.sh@221 -- # waitforlisten 2380966 /var/tmp/bperf.sock 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@831 -- # '[' -z 2380966 ']' 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:36.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:36.819 18:32:45 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:37.078 [2024-07-24 18:32:45.446346] Starting SPDK v24.09-pre git sha1 23a081919 / DPDK 24.03.0 initialization... 
00:29:37.078 [2024-07-24 18:32:45.446397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380966 ] 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:01.0 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:01.1 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:01.2 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:01.3 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:01.4 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:01.5 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:01.6 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:01.7 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:02.0 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:02.1 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:02.2 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:02.3 cannot be used 
00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:02.4 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:02.5 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:02.6 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b3:02.7 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:01.0 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:01.1 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:01.2 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:01.3 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:01.4 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:01.5 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:01.6 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:01.7 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:02.0 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:02.1 cannot be used 00:29:37.078 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:02.2 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:02.3 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:02.4 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.078 EAL: Requested device 0000:b5:02.5 cannot be used 00:29:37.078 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.079 EAL: Requested device 0000:b5:02.6 cannot be used 00:29:37.079 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:37.079 EAL: Requested device 0000:b5:02.7 cannot be used 00:29:37.079 [2024-07-24 18:32:45.540686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.079 [2024-07-24 18:32:45.615075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.016 18:32:46 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.016 18:32:46 chaining -- common/autotest_common.sh@864 -- # return 0 00:29:38.016 18:32:46 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:29:38.016 18:32:46 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:29:38.016 [2024-07-24 18:32:46.574992] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:38.016 nvme0n1 00:29:38.016 true 00:29:38.016 crypto0 00:29:38.016 18:32:46 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:38.275 Running I/O for 5 seconds... 
00:29:43.549 00:29:43.549 Latency(us) 00:29:43.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.549 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:29:43.549 Verification LBA range: start 0x0 length 0x200 00:29:43.549 crypto0 : 5.00 2553.91 159.62 0.00 0.00 12292.43 606.21 13316.92 00:29:43.549 =================================================================================================================== 00:29:43.549 Total : 2553.91 159.62 0.00 0.00 12292.43 606.21 13316.92 00:29:43.549 0 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@233 -- # sequence=25560 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:43.549 18:32:51 chaining -- 
bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:43.549 18:32:51 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@234 -- # encrypt=12780 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:43.549 18:32:52 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:43.808 18:32:52 chaining -- bdev/chaining.sh@235 -- # decrypt=12780 00:29:43.808 18:32:52 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:29:43.808 18:32:52 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:29:43.808 18:32:52 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:43.808 18:32:52 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:43.808 18:32:52 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:29:43.808 18:32:52 
chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:43.808 18:32:52 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:29:43.808 18:32:52 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:29:43.808 18:32:52 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:43.808 18:32:52 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:44.068 18:32:52 chaining -- bdev/chaining.sh@236 -- # crc32c=25560 00:29:44.068 18:32:52 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:29:44.068 18:32:52 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:29:44.068 18:32:52 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:29:44.068 18:32:52 chaining -- bdev/chaining.sh@242 -- # killprocess 2380966 00:29:44.068 18:32:52 chaining -- common/autotest_common.sh@950 -- # '[' -z 2380966 ']' 00:29:44.068 18:32:52 chaining -- common/autotest_common.sh@954 -- # kill -0 2380966 00:29:44.068 18:32:52 chaining -- common/autotest_common.sh@955 -- # uname 00:29:44.068 18:32:52 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:44.068 18:32:52 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2380966 00:29:44.068 18:32:52 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:44.068 18:32:52 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:44.068 18:32:52 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2380966' 00:29:44.068 killing process with pid 2380966 00:29:44.068 18:32:52 chaining -- common/autotest_common.sh@969 -- # kill 2380966 00:29:44.068 Received shutdown signal, test time was about 5.000000 seconds 00:29:44.068 00:29:44.068 Latency(us) 00:29:44.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.068 
=================================================================================================================== 00:29:44.068 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.068 18:32:52 chaining -- common/autotest_common.sh@974 -- # wait 2380966 00:29:44.328 18:32:52 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@117 -- # sync 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@120 -- # set +e 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:44.328 rmmod nvme_tcp 00:29:44.328 rmmod nvme_fabrics 00:29:44.328 rmmod nvme_keyring 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@124 -- # set -e 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@125 -- # return 0 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@489 -- # '[' -n 2379596 ']' 00:29:44.328 18:32:52 chaining -- nvmf/common.sh@490 -- # killprocess 2379596 00:29:44.328 18:32:52 chaining -- common/autotest_common.sh@950 -- # '[' -z 2379596 ']' 00:29:44.328 18:32:52 chaining -- common/autotest_common.sh@954 -- # kill -0 2379596 00:29:44.328 18:32:52 chaining -- common/autotest_common.sh@955 -- # uname 00:29:44.328 18:32:52 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:44.328 18:32:52 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2379596 00:29:44.328 18:32:52 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:44.328 18:32:52 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:44.328 18:32:52 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379596' 00:29:44.328 killing process with pid 
2379596 00:29:44.328 18:32:52 chaining -- common/autotest_common.sh@969 -- # kill 2379596 00:29:44.328 18:32:52 chaining -- common/autotest_common.sh@974 -- # wait 2379596 00:29:44.587 18:32:53 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:44.587 18:32:53 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:44.587 18:32:53 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:44.587 18:32:53 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:44.587 18:32:53 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:44.587 18:32:53 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.587 18:32:53 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:44.587 18:32:53 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.492 18:32:55 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:46.492 18:32:55 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:29:46.492 00:29:46.492 real 0m47.878s 00:29:46.492 user 0m55.272s 00:29:46.492 sys 0m13.074s 00:29:46.492 18:32:55 chaining -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:46.492 18:32:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:46.492 ************************************ 00:29:46.492 END TEST chaining 00:29:46.492 ************************************ 00:29:46.750 18:32:55 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:46.750 18:32:55 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:46.750 18:32:55 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:46.750 18:32:55 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:29:46.750 18:32:55 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:29:46.750 18:32:55 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:29:46.750 18:32:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:46.750 18:32:55 -- common/autotest_common.sh@10 -- # set +x 00:29:46.750 18:32:55 
-- spdk/autotest.sh@387 -- # autotest_cleanup 00:29:46.750 18:32:55 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:46.750 18:32:55 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:46.750 18:32:55 -- common/autotest_common.sh@10 -- # set +x 00:29:53.313 INFO: APP EXITING 00:29:53.313 INFO: killing all VMs 00:29:53.313 INFO: killing vhost app 00:29:53.313 WARN: no vhost pid file found 00:29:53.313 INFO: EXIT DONE 00:29:56.696 Waiting for block devices as requested 00:29:56.696 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:56.696 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:56.696 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:56.696 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:56.696 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:56.696 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:56.696 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:56.696 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:56.987 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:56.987 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:56.987 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:57.246 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:57.246 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:57.246 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:57.505 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:57.505 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:57.505 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:02.782 Cleaning 00:30:02.782 Removing: /var/run/dpdk/spdk0/config 00:30:02.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:02.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:02.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:02.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:02.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:02.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:02.782 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:02.782 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:02.782 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:02.782 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:02.782 Removing: /dev/shm/nvmf_trace.0 00:30:02.782 Removing: /dev/shm/spdk_tgt_trace.pid2102498 00:30:02.782 Removing: /var/run/dpdk/spdk0 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2097241 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2100998 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2102498 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2103052 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2104042 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2104321 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2105517 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2105573 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2105944 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2109690 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2111586 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2111950 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2112299 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2112678 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2112997 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2113233 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2113425 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2113703 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2114616 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2117664 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2117951 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2118277 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2118575 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2118617 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2118922 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2119201 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2119484 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2119746 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2119992 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2120243 00:30:02.782 Removing: 
/var/run/dpdk/spdk_pid2120497 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2120738 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2120975 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2121241 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2121525 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2121804 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2122091 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2122374 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2122668 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2122948 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2123235 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2123521 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2123811 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2124091 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2124369 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2124660 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2124952 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2125269 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2125745 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2126081 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2126378 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2126689 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2127153 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2127280 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2127623 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2128139 00:30:02.782 Removing: /var/run/dpdk/spdk_pid2128561 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2128636 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2132831 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2134995 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2137036 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2138134 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2139491 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2139905 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2140106 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2140173 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2145487 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2146057 
00:30:02.783 Removing: /var/run/dpdk/spdk_pid2147391 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2147681 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2153249 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2154865 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2155956 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2160042 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2161829 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2162736 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2167032 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2169352 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2170390 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2180529 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2182686 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2183642 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2193368 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2195522 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2196536 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2206137 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2209518 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2210649 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2221935 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2224474 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2225565 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2236213 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2238683 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2239851 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2251214 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2255008 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2256118 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2257276 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2260463 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2265988 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2268679 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2273643 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2277227 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2283013 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2286373 00:30:02.783 Removing: 
/var/run/dpdk/spdk_pid2293355 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2295671 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2302014 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2304439 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2310751 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2313152 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2317795 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2318336 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2319034 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2319563 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2320150 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2320797 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2321731 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2322101 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2324252 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2326405 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2328487 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2330235 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2332249 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2334289 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2336448 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2338115 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2338838 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2339304 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2341604 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2344001 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2346411 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2347756 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2349224 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2350031 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2350201 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2350340 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2350840 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2351105 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2352261 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2354205 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2356133 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2357206 
00:30:02.783 Removing: /var/run/dpdk/spdk_pid2358222 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2358550 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2358604 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2358625 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2359756 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2360528 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2361009 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2363233 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2365692 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2368037 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2369381 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2370741 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2371539 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2371565 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2375992 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2376228 00:30:02.783 Removing: /var/run/dpdk/spdk_pid2376456 00:30:03.042 Removing: /var/run/dpdk/spdk_pid2376548 00:30:03.042 Removing: /var/run/dpdk/spdk_pid2376840 00:30:03.042 Removing: /var/run/dpdk/spdk_pid2377431 00:30:03.042 Removing: /var/run/dpdk/spdk_pid2378493 00:30:03.042 Removing: /var/run/dpdk/spdk_pid2379797 00:30:03.042 Removing: /var/run/dpdk/spdk_pid2380966 00:30:03.042 Clean 00:30:03.042 18:33:11 -- common/autotest_common.sh@1451 -- # return 0 00:30:03.042 18:33:11 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:30:03.042 18:33:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:03.042 18:33:11 -- common/autotest_common.sh@10 -- # set +x 00:30:03.042 18:33:11 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:30:03.042 18:33:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:03.042 18:33:11 -- common/autotest_common.sh@10 -- # set +x 00:30:03.042 18:33:11 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:30:03.042 18:33:11 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]] 
00:30:03.042 18:33:11 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log 00:30:03.042 18:33:11 -- spdk/autotest.sh@395 -- # hash lcov 00:30:03.042 18:33:11 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:03.042 18:33:11 -- spdk/autotest.sh@397 -- # hostname 00:30:03.042 18:33:11 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info 00:30:03.301 geninfo: WARNING: invalid characters removed from testname! 00:30:25.240 18:33:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:25.240 18:33:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:26.614 18:33:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:28.516 18:33:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:29.893 18:33:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:31.797 18:33:39 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:33.175 18:33:41 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:33.176 18:33:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:33.176 18:33:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:33.176 18:33:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.176 18:33:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.176 18:33:41 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:33.176 18:33:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:33.176 18:33:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:33.176 18:33:41 -- paths/export.sh@5 -- $ export PATH
00:30:33.176 18:33:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:33.176 18:33:41 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:30:33.176 18:33:41 -- common/autobuild_common.sh@447 -- $ date +%s
00:30:33.176 18:33:41 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721838821.XXXXXX
00:30:33.176 18:33:41 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721838821.Wvjydu
00:30:33.176 18:33:41 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:30:33.176 18:33:41 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:30:33.176 18:33:41 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:30:33.176 18:33:41 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:33.176 18:33:41 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:33.176 18:33:41 -- common/autobuild_common.sh@463 -- $ get_config_params
00:30:33.176 18:33:41 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:30:33.176 18:33:41 -- common/autotest_common.sh@10 -- $ set +x
00:30:33.176 18:33:41 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:30:33.176 18:33:41 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:30:33.176 18:33:41 -- pm/common@17 -- $ local monitor
00:30:33.176 18:33:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:33.176 18:33:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:33.176 18:33:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:33.176 18:33:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:33.176 18:33:41 -- pm/common@25 -- $ sleep 1
00:30:33.176 18:33:41 -- pm/common@21 -- $ date +%s
00:30:33.176 18:33:41 -- pm/common@21 -- $ date +%s
00:30:33.176 18:33:41 -- pm/common@21 -- $ date +%s
00:30:33.176 18:33:41 -- pm/common@21 -- $ date +%s
00:30:33.176 18:33:41 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721838821
00:30:33.176 18:33:41 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721838821
00:30:33.176 18:33:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721838821
00:30:33.176 18:33:41 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721838821
00:30:33.435 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721838821_collect-vmstat.pm.log
00:30:33.435 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721838821_collect-cpu-load.pm.log
00:30:33.435 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721838821_collect-cpu-temp.pm.log
00:30:33.436 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721838821_collect-bmc-pm.bmc.pm.log
00:30:34.425 18:33:42 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:30:34.425 18:33:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:30:34.425 18:33:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:34.425 18:33:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:34.425 18:33:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:34.425 18:33:42 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:34.425 18:33:42 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:34.425 18:33:42 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:34.425 18:33:42 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:30:34.425 18:33:42 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:34.425 18:33:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:34.425 18:33:42 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:34.425 18:33:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:34.425 18:33:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:34.425 18:33:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:34.425 18:33:42 -- pm/common@44 -- $ pid=2396602
00:30:34.425 18:33:42 -- pm/common@50 -- $ kill -TERM 2396602
00:30:34.425 18:33:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:34.425 18:33:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:34.425 18:33:42 -- pm/common@44 -- $ pid=2396603
00:30:34.425 18:33:42 -- pm/common@50 -- $ kill -TERM 2396603
00:30:34.425 18:33:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:34.425 18:33:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:34.425 18:33:42 -- pm/common@44 -- $ pid=2396604
00:30:34.425 18:33:42 -- pm/common@50 -- $ kill -TERM 2396604
00:30:34.425 18:33:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:34.425 18:33:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:34.425 18:33:42 -- pm/common@44 -- $ pid=2396624
00:30:34.425 18:33:42 -- pm/common@50 -- $ sudo -E kill -TERM 2396624
00:30:34.425 + [[ -n 1972439 ]]
00:30:34.425 + sudo kill 1972439
00:30:34.446 [Pipeline] }
00:30:34.465 [Pipeline] // stage
00:30:34.470 [Pipeline] }
00:30:34.489 [Pipeline] // timeout
00:30:34.495 [Pipeline] }
00:30:34.514 [Pipeline] // catchError
00:30:34.519 [Pipeline] }
00:30:34.538 [Pipeline] // wrap
00:30:34.544 [Pipeline] }
00:30:34.561 [Pipeline] // catchError
00:30:34.571 [Pipeline] stage
00:30:34.573 [Pipeline] { (Epilogue)
00:30:34.587 [Pipeline] catchError
00:30:34.589 [Pipeline] {
00:30:34.604 [Pipeline] echo
00:30:34.605 Cleanup processes
00:30:34.611 [Pipeline] sh
00:30:34.910 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:34.910 2396707 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache
00:30:34.910 2397051 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:34.924 [Pipeline] sh
00:30:35.209 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:30:35.209 ++ grep -v 'sudo pgrep'
00:30:35.209 ++ awk '{print $1}'
00:30:35.209 + sudo kill -9 2396707
00:30:35.220 [Pipeline] sh
00:30:35.502 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:35.502 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:30:39.696 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:30:43.903 [Pipeline] sh
00:30:44.187 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:44.187 Artifacts sizes are good
00:30:44.201 [Pipeline] archiveArtifacts
00:30:44.207 Archiving artifacts
00:30:44.341 [Pipeline] sh
00:30:44.628 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:30:44.643 [Pipeline] cleanWs
00:30:44.654 [WS-CLEANUP] Deleting project workspace...
00:30:44.654 [WS-CLEANUP] Deferred wipeout is used...
00:30:44.661 [WS-CLEANUP] done
00:30:44.663 [Pipeline] }
00:30:44.686 [Pipeline] // catchError
00:30:44.699 [Pipeline] sh
00:30:44.980 + logger -p user.info -t JENKINS-CI
00:30:44.987 [Pipeline] }
00:30:45.001 [Pipeline] // stage
00:30:45.004 [Pipeline] }
00:30:45.017 [Pipeline] // node
00:30:45.022 [Pipeline] End of Pipeline
00:30:45.056 Finished: SUCCESS